SOFTWARE ENGINEERING
Spiral Model
The spiral model, initially proposed by Barry Boehm in 1986, supports the rapid
development of successive versions of the software. Using the spiral model, the
software is developed in a series of incremental releases. During the early
iterations, a release may be a paper model or prototype. During later
iterations, more and more complete versions of the engineered system are
produced. If the project is large, the number of phases increases accordingly.
Objective determination and identification of alternative solutions: Each cycle
in the spiral starts with identifying the objectives for that cycle, the various
alternatives that are possible for achieving those objectives, and the
constraints that exist.
Identify and resolve risks: The next phase in the cycle is to evaluate the
various alternatives against the goals and constraints. The focus of evaluation
in this stage is on the perceived risks of the project.
Develop next version of the product: The next phase is to develop strategies
that resolve the uncertainties and risks. This may include activities such as
benchmarking, simulation, and prototyping.
Review and plan for next phase: Finally, the next step is planned. The project
is reviewed, and a decision is made on whether to continue with a further cycle
of the spiral. If the decision is to continue, plans are drawn up for the next
phase of the project.
Advantages of Spiral Model:
. Software is produced early in the software life cycle.
. Risk handling is one of the most important advantages of the spiral model; it
is a good development model to follow for risky projects because risk analysis
and risk handling occur at every phase.
. Flexibility in requirements. In this model, changed requirements can be
incorporated accurately at later phases, and additional functionality can be
added at a later date.
. It is good for large and complex projects.
. It is good for customer satisfaction. Customers can be involved in the
development of the product from an early phase of the software development.
. Strong approval and documentation control.
. It is suitable for high risk projects, where business needs may be unstable.
A highly customized product can be developed using this.
Disadvantages of Spiral Model:
. It is not suitable for small projects as it is expensive.
. It is much more complex than other SDLC models.
. It depends too heavily on risk analysis and requires highly specific
expertise.
. Time management is difficult. As the number of phases is unknown at the
start of the project, time estimation is very difficult.
. Spiral may go on indefinitely.
. End of the project may not be known early.
. It is not suitable for low risk projects.
. It may be hard to define objective, verifiable milestones. The large number
of intermediate stages requires excessive documentation.
_____________________________________________________________________
_____________________________________________
White Box testing :
The term 'white box' is used because of the internal perspective of the system.
The names clear box, white box, and transparent box denote the ability to see
through the software's outer shell into its inner workings. White-box testing is
performed by developers, and the software is then sent to the testing team,
which performs black-box testing.
The main objective of white-box testing is to test the application's
infrastructure. It is done at lower levels, as it includes unit testing and
integration testing. It requires programming knowledge, as it mainly focuses on
the code structure, paths, conditions, and branches of a program. The primary
goal of white-box testing is to examine the flow of inputs and outputs through
the software and to strengthen its security.
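As a hedged illustration of white-box test design (the `apply_discount` function and its thresholds are invented for this sketch), test cases are chosen by reading the code so that every branch is exercised:

```python
def apply_discount(amount: float, is_member: bool) -> float:
    """Hypothetical function under test: tiered discount logic."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if is_member and amount >= 100:
        return amount * 0.80   # member discount on large orders
    if is_member:
        return amount * 0.90   # member discount otherwise
    return amount              # non-members pay full price

# White-box tests: one case per branch, chosen by reading the code.
assert apply_discount(150, is_member=True) == 150 * 0.80
assert apply_discount(50, is_member=True) == 50 * 0.90
assert apply_discount(50, is_member=False) == 50
try:
    apply_discount(0, is_member=False)   # error branch covered
except ValueError:
    pass
```

Note how each test targets a specific path in the code, which is only possible because the tester can see the internal structure.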
Black Box testing :
The primary source of black-box testing is a specification of requirements that
are stated by the customer. It is another type of manual testing. It is a software
testing technique that examines the functionality of the software without
knowing its internal structure or coding. It does not require programming
knowledge of the software. All test cases are designed by considering the input
and output of a particular function. In this testing, the test engineer analyses
the software against requirements, identifies the defects or bugs, and sends it
back to the development team.
In this method, the tester selects a function, gives it an input value to
examine its functionality, and checks whether the function produces the
expected output. If the function produces the correct output, it passes the
test; otherwise, it fails. Black-box testing is less exhaustive than white-box
and grey-box testing methods. It is the least time-consuming process among all
the testing processes. The main objective of black-box testing is to verify the
business needs or the customer's requirements.
In other words, black-box testing is a process of checking the functionality of
an application as per the customer's requirements. Mainly, there are three
types of black-box testing: functional testing, non-functional testing, and
regression testing.
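A minimal sketch of black-box test-case design (the `grade` function and its pass mark are invented; the tester would see only the specification, not the function body):

```python
# Black-box testing: test cases come only from the stated requirement,
# not from the code. Hypothetical requirement: grade(marks) returns
# "pass" for marks >= 40 and "fail" otherwise, for marks in 0..100.

def grade(marks: int) -> str:
    # The implementation is treated as a closed box by the tester.
    return "pass" if marks >= 40 else "fail"

# Test cases derived purely from the specification: valid inputs,
# boundary values, and the expected outputs.
for marks, expected in [(0, "fail"), (39, "fail"), (40, "pass"), (100, "pass")]:
    assert grade(marks) == expected
```

The boundary values 39 and 40 are chosen from the requirement alone, which is the essence of the technique.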
Black Box testing vs White Box testing:
1. Basic: Black-box testing is a software testing technique that examines the
functionality of software without knowing its internal structure or coding. In
white-box testing, the internal structure of the software is known to the
tester.
2. Also known as: Black-box testing is also known as functional testing,
data-driven testing, and closed-box testing. White-box testing is also known as
structural testing, clear-box testing, code-based testing, and transparent
testing.
3. Programming knowledge: In black-box testing, little programming knowledge
is required. In white-box testing, programming knowledge is required.
4. Algorithm testing: Black-box testing is not well suited for algorithm
testing. White-box testing is well suited and recommended for algorithm
testing.
5. Usage: Black-box testing is done at higher levels of testing, i.e. system
testing and acceptance testing. White-box testing is done at lower levels,
i.e. unit testing and integration testing.
6. Automation: Black-box testing is hard to automate due to the dependency of
testers and programmers on each other. White-box testing is easy to automate.
7. Tested by: Black-box testing is mainly performed by software testers.
White-box testing is mainly performed by developers.
8. Time consumption: Black-box testing is less time-consuming; its time
consumption depends on the availability of the functional specifications.
White-box testing is more time-consuming; designing test cases takes a long
time due to lengthy code.
9. Base of testing: The base of black-box testing is external expectations.
The base of white-box testing is the code, which is responsible for the
internal working.
10. Exhaustiveness: Black-box testing is less exhaustive than white-box
testing. White-box testing is more exhaustive than black-box testing.
11. Implementation knowledge: In black-box testing, no implementation
knowledge is required. In white-box testing, implementation knowledge is
required.
12. Aim: The main objective of black-box testing is to verify the business
needs or the customer's requirements. The main objective of white-box testing
is to check the code quality.
13. Defect detection: In black-box testing, defects are identified only once
the code is ready. In white-box testing, there is a possibility of early
detection of defects.
14. Types: Mainly, there are three types of black-box testing: functional
testing, non-functional testing, and regression testing. The types of
white-box testing are path testing, loop testing, and condition testing.
15. Errors: Black-box testing does not find errors related to the code.
White-box testing detects hidden errors and also helps to optimize the code.
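As a small, hedged illustration of the condition testing mentioned among the white-box types above (the `can_withdraw` function and its rules are invented), test cases are chosen so that each atomic boolean condition evaluates to both True and False:

```python
def can_withdraw(balance: float, amount: float, frozen: bool) -> bool:
    # Hypothetical function: withdrawal allowed only when the account
    # is not frozen AND the amount does not exceed the balance.
    return (not frozen) and (amount <= balance)

# Condition testing: each atomic condition takes both truth values.
assert can_withdraw(100, 50, frozen=False) is True    # both conditions True
assert can_withdraw(100, 50, frozen=True) is False    # 'not frozen' is False
assert can_withdraw(100, 150, frozen=False) is False  # 'amount <= balance' is False
```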
_____________________________________________________________________
______________________________________________
SRS —>>>
. SRS stands for "Software Requirement Specification". It is a document
prepared by business analysts and system analysts.
. It describes what the features of the software will be, how it will behave,
and how it will perform.
. It is a detailed description of the software system to be developed, with its
functional and non-functional requirements.
. The SRS document is effectively an agreement between the client and the
developer.
Characteristics of SRS document :
. Complete —>>> The SRS document must be complete, covering all the
requirements related to the software development.
. Consistent —>>> It should be consistent from beginning to end, so that
users can easily understand the requirements.
. Feasible —>>> All the requirements included in the SRS document must be
feasible to implement.
. Unambiguous —>>> Each requirement must be unambiguous, with only one
possible interpretation, so that the modules built from it are compatible with
one another.
_____________________________________________________________________
_____________________________________________
Types of Feasibility Study in Software Project
Development
A feasibility study in software engineering evaluates the feasibility of a
proposed project or system. It is one of the four important stages of the
software project management process. As the name suggests, a feasibility study
measures how beneficial the development of the software product will be for the
organization from a practical point of view. It is carried out for many
purposes: to analyze whether the software product will be right in terms of
development, implementation, its contribution to the organization, etc.
Types of Feasibility Study
The feasibility study mainly concentrates on below five mentioned areas.
. Technical Feasibility: In technical feasibility, the current resources (both
hardware and software) along with the required technology are
analyzed/assessed to develop the project. This study reports whether the
correct resources and technologies required for project development exist.
It also analyzes the technical skills and capabilities of the technical team,
whether the existing technology can be used, and whether maintenance and
upgrading of the chosen technology are easy.
. Operational Feasibility: In operational feasibility, it is analyzed whether
the proposed system will solve the business problem and satisfy user
requirements, how easily it can be operated and maintained once deployed, and
whether the users will accept it.
. Economic Feasibility: In an economic feasibility study, the cost and benefit
of the project are analyzed. A detailed analysis is carried out of what the
cost of the project will be, including all the costs of the final development:
required hardware and software resources, design and development cost,
operational cost, and so on. It is then analyzed whether the project will be
financially beneficial for the organization.
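The cost-benefit comparison described above can be sketched as simple arithmetic; all figures below are invented for illustration:

```python
# Economic feasibility sketch: compare total development cost against
# projected benefits over the planning horizon. Figures are hypothetical.
hardware_cost = 20_000
software_cost = 15_000
design_dev_cost = 50_000
annual_operational_cost = 10_000
annual_benefit = 45_000
years = 3

total_cost = (hardware_cost + software_cost + design_dev_cost
              + annual_operational_cost * years)
total_benefit = annual_benefit * years

print("Total cost:", total_cost)        # 115000
print("Total benefit:", total_benefit)  # 135000
print("Feasible:", total_benefit > total_cost)  # True
```

A real study would also discount future cash flows and account for risk, but the core comparison has this shape.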
. Legal Feasibility: In a legal feasibility study, the project is analyzed
from a legal point of view. This includes analyzing legal barriers to
implementing the project, data protection acts or social media laws, project
certificates, licenses, copyright, etc. Overall, a legal feasibility study
determines whether the proposed project conforms to legal and ethical
requirements.
. Schedule Feasibility: In a schedule feasibility study, the
timelines/deadlines for the proposed project are analyzed, including how much
time the teams will take to complete the final project. This has a great
impact on the organization, as the purpose of the project may fail if it
cannot be completed on time.
. Cultural and Political Feasibility: This section assesses how the software
project will affect the political environment and organizational culture. This
analysis takes into account the organization’s culture and how the
suggested changes could be received there, as well as any potential
political obstacles or internal opposition to the project. It is essential that
cultural and political factors be taken into account in order to execute
projects successfully.
. Market Feasibility: This refers to evaluating the market’s willingness and
ability to accept the suggested software system. Analyzing the target
market, understanding consumer wants and assessing possible rivals are all
part of this study. It assists in identifying whether the project is in line with
market expectations and whether there is a feasible market for the good or
service being offered.
. Resource Feasibility: This method evaluates if the resources needed to
complete the software project successfully are adequate and readily
available. Financial, technological and human resources are all taken into
account in this study. It guarantees that sufficient hardware, software,
trained labor and funding are available to complete the project successfully.
Importance:
The feasibility study is an important stage of the software project management
process: once it is complete, it gives a conclusion on whether to go ahead with
the proposed project because it is practically feasible, to stop it because it
is not feasible to develop, or to analyze the proposed project again.
_____________________________________________________________________
___________________________________________
SDLC-Software Development Life Cycle
RAD-Rapid Application Development
COCOMO-Constructive Cost Model
CMM-Capability Maturity Model
UML-Unified Modelling Language
KPA-Key Process Areas
_____________________________________________________________________
___________________________________________
Computer Based Information System (CBIS) is an information system in
which the computer plays a major role. Such a system consists of the following
elements⸺>
● Hardware: The term hardware refers to machinery. This category includes
the computer itself, which is often referred to as the central processing unit
(CPU), and all of its support equipment. Among the support equipment are
input and output devices, storage devices and communications devices.
● Software: The term software refers to computer programs and the manuals
(if any) that support them. Computer programs are machine-readable
instructions that direct the circuitry within the hardware parts of
the Computer Based Information System (CBIS) to function in ways that
produce useful information from data. Programs are generally stored on
some input / output medium-often a disk or tape.
● Data: Data are facts that are used by programs to produce useful
information. Like programs, data are generally stored in machine-readable
form on disk or tape until the computer needs them.
● Procedures: Procedures are the policies that govern the operation of
a computer system. “Procedures are to people what software is to
hardware” is a common analogy that is used to illustrate the role of
procedures in a CBIS.
● People: Every Computer Based Information System (CBIS) needs people if
it is to be useful. Often the most overlooked element of the CBIS is the
people: probably the component that most influences the success or failure
of the information system.
1. Transaction Processing Systems (TPS):
The main computer system in any organization deals with handling business
transactions. A transaction processing system is a computer system that
collects, organizes, stores, updates, and retrieves transaction data. This data is
crucial for keeping records and feeding into other computer systems used by
the organization. These systems are designed to make routine business tasks
easier and more efficient. A transaction can be any action that affects the
organization as a whole, like placing orders, billing customers, hiring
employees, or depositing checks. Different organizations deal with different
types of transactions. However, all organizations rely heavily on processing
transactions as a key part of their daily operations. The most successful
organizations handle transaction processing in a very organized manner.
Transaction processing systems ensure fast and accurate handling of
transactions and can be programmed to follow specific routines consistently.
Example—>>>
● ATMs: Use specialized computer programs to handle bank transactions
● Order processing: Collect orders from clients manually or through mail and
phone calls
● Hotel reservation systems: Use real-time processing to allocate rooms,
accept customer details, accept payment, and update hotel records
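A hedged sketch of the collect-store-update-retrieve cycle described above (the record fields and helper names are invented, and a real TPS would use durable storage rather than an in-memory list):

```python
# Minimal transaction processing sketch: collect, store, and retrieve
# transaction records so other systems (e.g. an MIS) can use them.
from datetime import datetime, timezone

transactions = []  # in a real TPS this would be a database

def record_transaction(kind: str, amount: float) -> dict:
    """Collect and store one transaction record."""
    txn = {
        "id": len(transactions) + 1,
        "kind": kind,                 # e.g. "order", "deposit"
        "amount": amount,
        "time": datetime.now(timezone.utc).isoformat(),
    }
    transactions.append(txn)
    return txn

def total_for(kind: str) -> float:
    """Retrieve stored data for downstream reporting."""
    return sum(t["amount"] for t in transactions if t["kind"] == kind)

record_transaction("deposit", 500.0)
record_transaction("order", 120.0)
record_transaction("deposit", 250.0)
print(total_for("deposit"))  # 750.0
```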
2. Management Information System (MIS):
Computers are highly effective at processing data for several reasons. One
major reason is their ability to quickly handle large amounts of data related to
accounts and transactions. In the past, computers were mostly used for tasks
like keeping records and automating routine office work. However, in recent
years, there has been a growing focus on using computers to provide
information for decision-making, planning, and control purposes in
management. Management Information Systems (MIS) are particularly
concerned with supporting management functions. MIS is a type of information
system that supplies managers at all levels with relevant, timely, accurate,
complete, concise, and economically feasible information necessary for smooth
business operations.
Example—>>>
● Real-time performance reports
● Analytical reports
● Improved internal communication
3. Decision Support Systems (DSS) :
It is an information system that offers the kind of information that may not be
predictable, the kind that business professionals may need only once. These
systems do not produce regularly scheduled management reports. Instead,
they are designed to respond to a wide range of requests. It is true that all the
decisions in an organisation are not of a recurring nature. Decision support
systems assist managers who must make decisions that are not highly
structured, often called unstructured or semi-structured decisions. A decision
is considered unstructured if there are no clear procedures for making the
decision and if not all the factors to be considered in the decision can be
readily identified in advance. Judgement of the manager plays a vital role in
decision making where the problem is not structured. The decision support
system supports, but does not replace, judgement of manager.
Example—>>>
● ERP dashboards: Visualizes changes in production, monitors business
performance, and identifies areas for improvement
● Clinical decision support system: Uses advanced decision-making
algorithms to help physicians make medical decisions
4. Office Automation Systems (OAS):
Office automation systems are among the newest and most rapidly expanding
computer based information systems. They are being developed with the hopes
and expectations that they will increase the efficiency and productivity of office
workers-typists, secretaries, administrative assistants, staff professionals,
managers and the like. Many organisations have taken the first step toward
automating their offices. Often this step involves the use of word processing
equipment to facilitate the typing, storing, revising and printing of textual
materials. Another development is a computer based communications system
such as electronic mail which allows people to communicate in an electronic
mode through computer terminals. An office automation system can be
described as a multi-function, integrated computer based system that allows
many office activities to be performed in an electronic mode.
Example—>>>
● Voice mail
● Fax (facsimile)
● Videoconferencing
5. Knowledge-Based Systems (KBS):
Knowledge-based systems (KBS) are computer programs that use a centralized
repository of data known as a knowledge base to provide a method for
problem-solving. Knowledge-based systems are a form of artificial intelligence
(AI) designed to capture the knowledge of human experts to support decision-
making. An expert system is an example of a knowledge-based system because
it relies on human expertise.
KBSes can assist in decision-making, human
learning and creating a companywide knowledge-sharing platform, for
example. KBS can be used as a broad term, but these programs are generally
distinguished by representing knowledge as a reasoning system to derive new
knowledge.
A basic KBS works using a knowledge base and an inference engine. The
knowledge base is a repository of data that contains a collection of
information in a given field -- such as medical data. The inference engine
processes and locates data based on requests, similar to a search engine. A
reasoning system is used to draw conclusions from the data provided and
make decisions based on if-then rules, logic programming or constraint
handling rules. Users interact with the system through a user interface.
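The knowledge-base-plus-inference-engine structure described above can be sketched with simple if-then rules and a tiny forward-chaining loop; the medical facts and rules here are invented for illustration:

```python
# Knowledge-based system sketch: a knowledge base of if-then rules and
# a minimal forward-chaining inference engine. Facts and rules are
# hypothetical, loosely inspired by diagnostic systems such as Mycin.

# Knowledge base: each rule is (set of required facts, fact to conclude).
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts: set) -> set:
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"fever", "cough", "short_of_breath"}))
# derives "flu_suspected" and then "refer_to_doctor"
```

Note the separation: the rules (knowledge base) can be edited by a domain expert without touching the `infer` loop (inference engine), which is the defining design choice of a KBS.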
What are knowledge-based systems used for?
Knowledge-based systems are commonly used to aid in solving complex
problems and to support human learning. KBSes have been developed for
numerous applications. For example, an early knowledge-based system, Mycin,
was created to help doctors diagnose diseases. Healthcare has remained an
important market for knowledge-based systems, which are now referred to
as clinical decision support systems in the health sciences context.
_____________________________________________________________________
___________________________________________
Software Development Life Cycle (SDLC)
Software development life cycle (SDLC) is a structured process that is used to
design, develop, and test good-quality software. SDLC, or software
development life cycle, is a methodology that defines the entire procedure of
software development step-by-step.
The goal of the SDLC life cycle model is to deliver high-
quality, maintainable software that meets the user’s requirements. SDLC in
software engineering models outlines the plan for each stage so that each
stage of the software development model can perform its task efficiently to
deliver the software at a low cost within a given time frame that meets users’
requirements.
Stage-1: Planning and Requirement Analysis :
Planning is a crucial first step in software development. In this same stage,
requirement analysis is also performed by the developers of the organization,
based on customer inputs, sales department feedback, and market surveys.
Stage-2: Defining Requirements :
In this stage, all the requirements for the target software are specified. These
requirements get approval from customers, market analysts, and stakeholders.
Stage-3: Designing Architecture :
The SRS is a reference for software designers to come up with the best
architecture for the software. Hence, based on the requirements defined in the
SRS, multiple designs for the product architecture are proposed and documented
in the Design Document Specification (DDS).
Stage-4: Developing Product :
At this stage, the fundamental development of the product starts. Developers
write code following the design in the DDS. Hence, it is important for the
coders to follow the protocols set by the organization. Conventional
programming tools like compilers, interpreters, debuggers, etc. are also put
into use at this stage. Popular languages like C/C++, Python, Java, etc. are
used as per the software requirements.
Stage-5: Product Testing and Integration :
After the development of the product, testing of the software is necessary to
ensure its smooth execution. Although minimal testing is conducted at every
stage of the SDLC, at this stage all probable flaws are tracked, fixed, and
retested. This ensures that the product meets the quality requirements of the
SRS.
Stage-6: Deployment and Maintenance of Products :
After detailed testing, the conclusive product is released in phases as per the
organization's strategy. It is then tested in a real industrial environment to
ensure its smooth performance. If it performs well, the organization releases
the product as a whole. After collecting user feedback, the company releases
it as-is or with auxiliary improvements to make it more helpful for customers.
However, deployment alone is not enough: along with deployment, the product
must be supervised and maintained.
_____________________________________________________________________
__________________________________________
SDLC - Waterfall Model :
The Waterfall Model was the first Process Model to be introduced. It is also
referred to as a linear-sequential life cycle model. It is very simple to
understand and use. In a waterfall model, each phase must be completed
before the next phase can begin and there is no overlapping in the phases.
● Requirement Gathering and analysis − All possible requirements of the
system to be developed are captured in this phase and documented in a
requirement specification document.
● System Design − The requirement specifications from first phase are
studied in this phase and the system design is prepared. This system
design helps in specifying hardware and system requirements and helps in
defining the overall system architecture.
● Implementation − With inputs from the system design, the system is first
developed in small programs called units, which are integrated in the next
phase. Each unit is developed and tested for its functionality, which is
referred to as Unit Testing.
● Integration and Testing − All the units developed in the implementation
phase are integrated into a system after testing of each unit. Post
integration the entire system is tested for any faults and failures.
● Deployment of system − Once the functional and non-functional testing is
done; the product is deployed in the customer environment or released into
the market.
● Maintenance − There are some issues which come up in the client
environment. To fix those issues, patches are released. Also to enhance the
product some better versions are released. Maintenance is done to deliver
these changes in the customer environment.
Waterfall Model - Advantages :
● Simple and easy to understand and use
● Easy to manage due to the rigidity of the model. Each phase has specific
deliverables and a review process.
● Phases are processed and completed one at a time.
● Works well for smaller projects where requirements are very well
understood.
● Clearly defined stages.
● Well understood milestones.
● Easy to arrange tasks.
● Process and results are well documented.
Waterfall Model - Disadvantages :
● No working software is produced until late during the life cycle.
● High amounts of risk and uncertainty.
● Not a good model for complex and object-oriented projects.
● Poor model for long and ongoing projects.
● Not suitable for the projects where requirements are at a moderate to high
risk of changing. So, risk and uncertainty is high with this process model.
● It is difficult to measure progress within stages.
● Cannot accommodate changing requirements.
● Adjusting scope during the life cycle can end a project.
● Integration is done as a "big bang" at the very end, which doesn't allow
identifying any technological or business bottlenecks or challenges early.
_____________________________________________________________________
__________________________________________
Prototyping Model – Software Engineering :
Prototyping is defined as the process of developing a working replication of a
product or system that has to be engineered. It offers a small-scale facsimile
of the end product and is used for obtaining customer feedback. This model is
used when the customers do not know the exact project requirements
beforehand.
Steps of Prototyping Model :
Step 1: Requirement Gathering and Analysis: This is the initial step in
designing a prototype model. In this phase, users are asked about what they
expect or what they want from the system.
Step 2: Quick Design: This is the second step in the Prototyping Model. A
quick design covering the basic aspects of the requirements is prepared,
giving the user a quick overview of the system.
Step 3: Build a Prototype: This step helps in building an actual prototype from
the knowledge gained from prototype design.
Step 4: Initial User Evaluation: In this step, the prototype is presented to
the customer for preliminary evaluation. The customer examines the performance
of the model and reports the strengths and weaknesses of the design to the
developer.
Step 5: Refining Prototype: If the user gives feedback, the prototype is
refined according to the user's responses and suggestions. This cycle repeats
until the final system is approved.
Step 6: Implement Product and Maintain: This is the final phase of the
Prototyping Model, where the final system is tested and moved to production.
The program is then maintained regularly to prevent failures.
Advantages of using Prototype Model :
. This model is flexible in design.
. It is easy to detect errors.
. We can find missing functionality easily.
. There is scope for refinement: new requirements can be easily
accommodated.
. It can be reused by the developer for more complicated projects in the
future.
. It ensures a greater level of customer satisfaction and comfort.
. It is ideal for online systems.
. It helps developers and users both understand the system better.
. Integration requirements are very well understood and deployment
channels are decided at a very early stage.
. It can actively involve users in the development phase.
Disadvantages of using Prototype Model :
. This model is costly.
. It has poor documentation because of continuously changing customer
requirements.
. There may be too much variation in requirements.
. Customers sometimes demand the actual product to be delivered soon
after seeing an early prototype.
. There may be sub-optimal solutions because developers hurry to build
prototypes.
. Customers may not be satisfied or interested in the product after seeing
the initial prototype.
. There is uncertainty in determining the number of iterations.
. There may be incomplete or inadequate problem analysis.
. It may increase the complexity of the system.
_____________________________________________________________________
__________________________________________
Evolutionary Model – Software Engineering :
The evolutionary model combines aspects of the iterative and incremental
models of software development. Instead of releasing the entire system all at
once, it's delivered gradually over time, often in smaller pieces. This approach
allows for adjustments based on user feedback and other factors. Initially, some
basic planning is needed to outline requirements and architecture. This model
works well for software products that may need changes to their features
during development.
What is the Evolutionary Model?
The Evolutionary development model divides the development cycle into
smaller, incremental waterfall models in which users can get access to the
product at the end of each cycle.
. Feedback is provided by the users on the product for the planning stage of
the next cycle and the development team responds, often by changing the
product, plan, or process.
. Therefore, the software product evolves with time.
. Other models have the disadvantage that the duration of time from the
start of the project to the delivery of a solution is very high.
. The evolutionary model solves this problem with a different approach.
. The evolutionary model suggests breaking down work into smaller chunks,
prioritizing them, and then delivering those chunks to the customer one by
one.
. The number of chunks can be large; each chunk corresponds to one delivery
made to the customer.
. The main advantage is that the customer's confidence increases, as the
customer constantly receives quantifiable deliverables from the beginning of
the project to verify and validate the requirements.
. The model accommodates changing requirements, since all work is broken
down into maintainable work chunks.
Necessary Conditions for Implementing this Model:
. Customer needs are clear and have been explained in depth to the
development team.
. Small changes may be required in separate parts, but no major change is
expected.
. Since this model takes time, the schedule must leave room for market
constraints.
. Risk is high, with continuous targets to achieve and report to the customer
repeatedly.
. It is used when the technology is new and requires time to learn.
Advantages of the Evolutionary Model:
. Adaptability to Changing Requirements: Evolutionary models work
effectively in projects when the requirements are ambiguous or change
often. They support adjustments and flexibility along the course of
development.
. Early and Gradual Distribution: Functional components or prototypes can
be delivered early thanks to incremental development. Faster user
satisfaction and feedback may result from this.
. User Commentary and Involvement: Evolutionary models place a strong
emphasis on ongoing user input and participation. This guarantees that the
software offered closely matches the needs and expectations of the user.
. Improved Handling of Difficult Projects: Big, complex tasks can be
effectively managed with the help of evolutionary models. The development
process is made simpler by segmenting the project into smaller,
easier-to-manage portions.
Disadvantages of the Evolutionary Model:
. Communication Difficulties: Evolutionary models require constant
cooperation and communication. The strategy may be less effective if there
are gaps in communication or if team members are spread out
geographically.
. Dependence on an Expert Group: A knowledgeable and experienced
group that can quickly adjust to changes is needed for evolutionary models.
Teams lacking experience may find it difficult to handle the
model's dynamic nature.
. Increasing Management Complexity: Complexity can be introduced by
organizing and managing several increments or iterations, particularly in
large projects. In order to guarantee integration and synchronization, good
project management is needed.
. Greater Initial Expenditure: As evolutionary models necessitate continual
testing, user feedback and prototyping, they may come with a greater
starting cost. This may be a problem for projects that have limited funding.
_____________________________________________________________________
__________________________________________
RAD (Rapid Application Development) Model
RAD is a linear sequential software development process model that
emphasizes a concise development cycle using an element based construction
approach. If the requirements are well understood and described, and the
project scope is constrained, the RAD process enables a development team to
create a fully functional system within a very short time period.
RAD (Rapid Application Development) is a concept that products can be
developed faster and of higher quality through:
● Gathering requirements using workshops or focus groups
● Prototyping and early, reiterative user testing of designs
● The re-use of software components
● A rigidly paced schedule that defers design improvements to the next
product version
● Less formality in reviews and other team communication
The various phases of RAD are as follows:
1. Business Modelling: The information flow among business functions is
defined by answering questions such as: what data drives the business process,
what data is generated, who generates it, where does the information go, who
processes it, and so on.
2. Data Modelling: The data collected from business modeling is refined into a
set of data objects (entities) that are needed to support the business. The
attributes (character of each entity) are identified, and the relation between
these data objects (entities) is defined.
3. Process Modelling: The information objects defined in the data modeling
phase are transformed to achieve the data flow necessary to implement a
business function. Processing descriptions are created for adding, modifying,
deleting, or retrieving a data object.
4. Application Generation: Automated tools are used to facilitate construction
of the software; fourth-generation (4GL) techniques are often employed.
5. Testing & Turnover: Many of the programming components have already
been tested, since RAD emphasizes reuse. This reduces the overall testing time.
But the new part must be tested, and all interfaces must be fully exercised.
When to use RAD Model?
● When the system can be modularized and delivered within a short span of
time (2-3 months).
● When the requirements are well-known.
● When the technical risk is limited.
● When the budget allows the use of automated code generating tools.
Advantages of RAD Model
● This model is flexible to change.
● In this model, changes are easily adoptable.
● Each phase in RAD brings the highest priority functionality to the customer.
● It reduces development time.
● It increases the reusability of features.
Disadvantages of RAD Model
● It requires highly skilled designers.
● Not all applications are compatible with RAD.
● The RAD model cannot be used for smaller projects.
● It is not suitable when technical risk is high.
● It requires user involvement.
_____________________________________________________________________
__________________________________________
What is the Cocomo Model?
The Cocomo Model is a procedural cost estimation model for software projects,
often used to reliably predict the various parameters associated with a
project, such as size, effort, cost, time, and quality. It
was proposed by Barry Boehm in 1981 and is based on the study of 63 projects,
which makes it one of the best-documented models.
The Six phases of detailed COCOMO are:
. Planning and requirements
. System design
. Detailed design
. Module code and test
. Integration and test
. Cost Constructive model
The Cocomo model divides software projects into 3 types-
● Organic
● Semi-detached
● Embedded
ORGANIC – A software development project comes under organic type if the
development team is small, the problem is not complex, and has been solved
before. Also, a nominal experience of team members is considered. Some of
the projects that can be organic are- small data processing systems, basic
inventory management systems, business systems.
SEMIDETACHED – A software project that is in-between the organic and
embedded system is a semi-detached system. Its characteristics include – a
middle-sized development team, with a mix of both- experienced and
inexperienced members, and the project complexity is neither too high nor too
low.
The problems faced in such projects are partly known and partly unknown, and
some have never been faced before. A few such projects are: Database
Management System (DBMS), a new unknown operating system, a difficult
inventory management system.
EMBEDDED – A project requiring a large team with experienced members, and
having the highest level of complexity is an embedded system. The members
need to be creative enough to be able to develop complex models. Such
projects include- air traffic models, ATMs, complex banking systems, etc.
Types of COCOMO Model
The different types of COCOMO models define the depth of cost estimation
required for the project. The software manager decides which type of model to
choose.
According to Boehm, the estimation should be divided into 3 stages-
● Basic Model
● Intermediate Model
● Detailed Model
Basic Model – This model is based on rough calculations and thus has very
limited accuracy. The whole model is based only on lines of source code to
estimate effort; other factors are neglected.
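The basic model's calculation can be sketched in code. The coefficient values below are Boehm's published Basic COCOMO constants; the function and variable names are illustrative:

```python
# Basic COCOMO sketch: effort and development time estimated from KLOC alone.
# The (a, b, c, d) tuples per project type are Boehm's Basic COCOMO constants.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b   # E = a * (KLOC)^b
    tdev = c * effort ** d   # Tdev = c * (E)^d
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")
print(f"Effort: {effort:.1f} person-months, Time: {tdev:.1f} months")
# Effort: 91.3 person-months, Time: 13.9 months
```

The intermediate model would multiply this nominal effort by cost-driver effort multipliers; the detailed model applies such multipliers per phase.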
Intermediate Model – The intermediate model dives deeper and includes more
factors such as cost drivers into the calculation. This enhances the accuracy of
estimation. The cost driver includes attributes like reliability, capability, and
experience.
Detailed Model – The detailed model, or the complete model, includes all the
factors of both the basic model and the intermediate model. In the detailed
model, various effort multipliers are used for each cost driver attribute.
Importance of the COCOMO Model:
. Cost Estimation: To help with resource planning and project budgeting,
COCOMO offers a methodical approach to software development cost
estimation.
. Resource Management: By taking team experience, project size, and
complexity into account, the model helps with efficient resource allocation.
. Project Planning: COCOMO assists in developing practical project plans
that include attainable objectives, due dates, and benchmarks.
. Risk Management: Early in the development process, COCOMO assists in
identifying and mitigating potential hazards by including risk elements.
. Support for Decisions: During project planning, the model provides a
quantitative foundation for choices about scope, priorities, and resource
allocation.
. Benchmarking: To compare and assess various software development
projects to industry standards, COCOMO offers a benchmark.
. Resource Optimization: The model helps to maximize the use of
resources, which raises productivity and lowers costs.
_____________________________________________________________________
__________________________________________
System analysis
System analysis in software engineering refers to the process of studying,
understanding, and defining the specifications and requirements of a software
system to address a particular problem or fulfill certain objectives. It involves
analyzing the existing system (if any), identifying user needs and expectations,
and designing a solution that meets those requirements effectively and
efficiently.
Here's a breakdown of what system analysis entails:
● Understanding the Problem Domain: System analysts begin by gaining a
comprehensive understanding of the problem domain for which the
software system is being developed. This involves studying the business
processes, workflows, and requirements of the organization or stakeholders
involved.
● Gathering Requirements: System analysts gather requirements by
conducting interviews with stakeholders, observing current processes, and
collecting documentation. Requirements can include functional
requirements (what the system should do) and non-functional requirements
(constraints on the system's operation, such as performance, security, and
usability).
● Analyzing Requirements: Once requirements are gathered, system
analysts analyze and prioritize them to identify the most critical and
essential features and functionalities that the system must deliver. They
may use techniques like requirement prioritization, requirements
traceability, and conflict resolution to ensure that the requirements are
consistent and feasible.
● Modeling System Behavior: System analysts create models to represent
the behavior and structure of the software system. This can include data
flow diagrams (DFDs), entity-relationship diagrams (ERDs), use case
diagrams, and sequence diagrams. These models help stakeholders
visualize the system's functionality and interactions.
● Evaluating Alternatives: System analysts evaluate alternative solutions
and technologies to determine the best approach for implementing the
system. This may involve conducting cost-benefit analysis, feasibility
studies, and risk assessments to identify potential risks and benefits
associated with each option.
● Designing System Architecture: Based on the requirements and analysis,
system analysts design the architecture of the software system, including
the high-level structure of components, modules, and interfaces. They
define how the system will be organized and how different components will
interact with each other to achieve the desired functionality.
● Documenting Specifications: Throughout the analysis process, system
analysts document the specifications, requirements, and design decisions
in detail. This documentation serves as a reference for developers, testers,
and other stakeholders involved in the software development lifecycle.
● Communication and Collaboration: System analysis involves active
communication and collaboration with stakeholders, including users,
developers, project managers, and other members of the development
team. Effective communication ensures that everyone has a clear
understanding of the requirements and objectives of the software system.
Overall, system analysis is a critical phase in the software engineering process,
laying the foundation for the successful development and implementation of
software systems that meet the needs of users and stakeholders. It involves a
systematic and rigorous approach to understanding, analyzing, and defining the
requirements and specifications of the system before proceeding to the design
and implementation phases.
_____________________________________________________________________
__________________________________________
What is DFD(Data Flow Diagram)?
DFD is the abbreviation for Data Flow Diagram. The flow of data of a system or
a process is represented by DFD. It also gives insight into the inputs and
outputs of each entity and the process itself. DFD does not have control flow
and no loops or decision rules are present. Specific operations depending on
the type of data can be explained by a flowchart. It is a graphical tool, useful for
communicating with users, managers, and other personnel. It is useful for
analyzing existing as well as proposed systems.
It should be pointed out that a DFD is not a flowchart. In drawing the DFD,
the designer has to specify the major transforms in the path of the data
flowing from the input to the output. DFDs can be hierarchically organized,
which helps in progressively partitioning and analyzing large systems.
It provides an overview of:
● What data the system processes.
● What transformations are performed.
● What data are stored.
● What results are produced, etc.
Data Flow Diagram can be represented in several ways. The DFD belongs to
structured-analysis modeling tools. Data Flow diagrams are very popular
because they help us to visualize the major steps and data involved in
software-system processes.
Characteristics of DFD
● DFDs are commonly used during problem analysis.
● DFDs are quite general and are not limited to problem analysis for software
requirements specification.
● DFDs are very useful in understanding a system and can be effectively used
during analysis.
● It views a system as a function that transforms the inputs into desired
outputs.
● The DFD aims to capture the transformations that take place within a
system to the input data so that eventually the output data is produced.
● The processes are shown by named circles and data flows are represented
by named arrows entering or leaving the bubbles.
● A rectangle represents a source or sink and is a net originator or
consumer of data. A source or sink is typically outside the main system of
study.
Components of DFD
The Data Flow Diagram has 4 components:
● Process : The transformation of input to output in a system takes place
because of a process function. The symbol of a process is a rectangle with
rounded corners, an oval, a rectangle, or a circle. The process is named in
one word, a short sentence, or a phrase that expresses its essence.
● Data Flow : Data flow describes the information transferring between
different parts of the systems. The arrow symbol is the symbol of data flow.
A relatable name should be given to the flow to determine the information
which is being moved. Data flow also represents material along with
information that is being moved. Material shifts are modeled in systems that
are not merely informative. A given flow should only transfer a single type
of information. The direction of flow is represented by the arrow which can
also be bi-directional.
● Warehouse : The data is stored in the warehouse for later use. Two
horizontal lines represent the symbol of the store. The warehouse is not
restricted to being a data file; rather, it can be anything like a folder
with documents, an optical disc, or a filing cabinet. The data warehouse can
be viewed independently of its implementation. When data flows from the
warehouse it is considered data reading, and when data flows to the
warehouse it is called data entry or data updating.
● Terminator : The Terminator is an external entity that stands outside of the
system and communicates with the system. It can be, for example,
organizations like banks, groups of people like customers or different
departments of the same organization, which is not a part of the model
system and is an external entity. Modeled systems also communicate with
terminators.
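As a rough illustration only (not a standard library or notation), the four components above can be modeled as plain data structures; all names here are hypothetical:

```python
from dataclasses import dataclass

# Minimal, illustrative model of the four DFD components.
@dataclass
class Process:        # named circle/bubble that transforms data
    name: str

@dataclass
class Warehouse:      # data store, drawn as two horizontal lines
    name: str

@dataclass
class Terminator:     # external entity outside the system
    name: str

@dataclass
class DataFlow:       # named arrow between two components
    label: str
    source: object
    target: object

customer = Terminator("Customer")
place_order = Process("Place Order")
orders = Warehouse("Orders")

flows = [
    DataFlow("order details", customer, place_order),   # input from terminator
    DataFlow("validated order", place_order, orders),   # data entry into store
]

for f in flows:
    print(f"{f.source.name} --[{f.label}]--> {f.target.name}")
```

Each `DataFlow` carries a single, named kind of information, mirroring the rule that one flow should transfer only one type of data.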
Advantages of DFD
● It helps us to understand the functioning and the limits of a system.
● It is a graphical representation which is very easy to understand as it helps
visualize contents.
● A Data Flow Diagram represents a detailed and well-explained diagram of
system components.
● It is used as part of the system documentation.
● Data Flow Diagrams can be understood by both technical and nontechnical
people because they are very easy to understand.
Disadvantages of DFD
● At times DFD can confuse the programmers regarding the system.
● A Data Flow Diagram takes a long time to be generated, and many times
analysts are denied permission to work on it for this reason.
_____________________________________________________________________
__________________________________________
Entity-Relationship Diagrams
ER-modeling is a data modeling method used in software engineering to
produce a conceptual data model of an information system. Diagrams created
using this ER-modeling method are called Entity-Relationship Diagrams or ER
diagrams or ERDs.
Purpose of ERD
● The database analyst gains a better understanding of the data to be
contained in the database through the step of constructing the ERD.
● The ERD serves as a documentation tool.
● Finally, the ERD is used to connect the logical structure of the database to
users. In particular, the ERD effectively communicates the logic of the
database to users.
Components of an ER Diagrams
1. Entity
An entity can be a real-world object, either animate or inanimate, that can be
merely identifiable. An entity is denoted as a rectangle in an ER diagram. For
example, in a school database, students, teachers, classes, and courses
offered can be treated as entities. All these entities have some attributes or
properties that give them their identity.
Entity Set
An entity set is a collection of entities of a related type. An entity set may
include entities with attributes sharing similar values. For example, a Student
set may contain all the students of a school; likewise, a Teacher set may
include all the teachers of a school from all faculties. Entity sets need not
be disjoint.
2. Attributes
Entities are denoted utilizing their properties, known as attributes. All attributes
have values. For example, a student entity may have name, class, and age as
attributes.
There exists a domain or range of values that can be assigned to attributes. For
example, a student's name cannot be a numeric value. It has to be alphabetic. A
student's age cannot be negative, etc.
There are five types of attributes:
. Key attribute
. Composite attribute
. Single-valued attribute
. Multi-valued attribute
. Derived attribute
1. Key attribute: Key is an attribute or collection of attributes that uniquely
identifies an entity among the entity set. For example, the roll_number of a
student makes him identifiable among students.
There are mainly three types of keys:
. Super key: A set of attributes that collectively identifies an entity in the
entity set.
. Candidate key: A minimal super key is known as a candidate key. An entity
set may have more than one candidate key.
. Primary key: A primary key is one of the candidate keys chosen by the
database designer to uniquely identify the entity set.
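The key definitions above can be checked mechanically over sample data. The student records and helper function below are hypothetical:

```python
# Hypothetical student entity set.
students = [
    {"roll_number": 1, "name": "Asha", "class": "10A"},
    {"roll_number": 2, "name": "Ravi", "class": "10A"},
    {"roll_number": 3, "name": "Asha", "class": "10B"},
]

def is_super_key(attrs, rows):
    """A set of attributes is a super key if its value combinations
    are unique across all rows of the entity set."""
    seen = {tuple(row[a] for a in attrs) for row in rows}
    return len(seen) == len(rows)

print(is_super_key(("roll_number",), students))         # True: uniquely identifies
print(is_super_key(("name",), students))                # False: "Asha" repeats
print(is_super_key(("roll_number", "name"), students))  # True, but not minimal:
                                                        # roll_number alone suffices
```

Here `("roll_number",)` is a candidate key (a minimal super key), and the designer would typically choose it as the primary key.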
2. Composite attribute: An attribute that is a combination of other attributes is
called a composite attribute. For example, In student entity, the student
address is a composite attribute as an address is composed of other
characteristics such as pin code, state, country.
3. Single-valued attribute: A single-valued attribute contains a single value.
For example, Social_Security_Number.
4. Multi-valued Attribute: If an attribute can have more than one value, it is
known as a multi-valued attribute. Multi-valued attributes are depicted by the
double ellipse. For example, a person can have more than one phone number,
email-address, etc.
5. Derived attribute: Derived attributes are attributes that do not exist in
the physical database; their values are derived from other attributes present
in the database. For example, age can be derived from date_of_birth. In the
ER diagram, derived attributes are depicted by a dashed ellipse.
3. Relationships
The association among entities is known as a relationship. Relationships are
represented by a diamond-shaped box. For example, an employee works_at a
department, a student enrolls in a course. Here, Works_at and Enrolls are called
relationships.
Relationship set
A set of relationships of a similar type is known as a relationship set. Like
entities, a relationship too can have attributes. These attributes are called
descriptive attributes.
Degree of a relationship set
The number of participating entities in a relationship describes the degree of
the relationship. The three most common relationships in E-R models are:
. Unary (degree1)
. Binary (degree2)
. Ternary (degree3)
1. Unary relationship: This is also called a recursive relationship. It is a
relationship between the instances of one entity type. For example, one person
is married to only one person.
2. Binary relationship: It is a relationship between the instances of two entity
types. For example, the Teacher teaches the subject.
3. Ternary relationship: It is a relationship amongst instances of three entity
types. For example, a relationship "may have" can associate three entities,
i.e., TEACHER, STUDENT, and SUBJECT, with all three as many-to-many
participants. There may be one or many participants in a ternary
relationship.
In general, "n" entities can be related by the same relationship and is known
as n-ary relationship.
Cardinality
Cardinality describes the number of entities in one entity set, which can be
associated with the number of entities of other sets via relationship set.
Types of Cardinalities
1. One to One: One entity from entity set A can be associated with at most one
entity of entity set B and vice versa. Let us assume that each student has only
one student ID, and each student ID is assigned to only one person. So, the
relationship will be one to one.
2. One to Many: When a single instance of an entity is associated with more
than one instance of another entity, it is called a one-to-many relationship.
For example, a client can place many orders, but an order cannot be placed by
many clients.
3. Many to One: More than one entity from entity set A can be associated with
at most one entity of entity set B, however an entity from entity set B can be
associated with more than one entity from entity set A. For example - many
students can study in a single college, but a student cannot study in many
colleges at the same time.
4. Many to Many: One entity from A can be associated with more than one
entity from B and vice-versa. For example, the student can be assigned to
many projects, and a project can be assigned to many students.
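These cardinalities can be sketched with ordinary mappings; the student, college, and project names below are hypothetical:

```python
# Many to One: many students map to one college, but each student to only one,
# so a plain dict (one value per key) is a natural fit.
student_college = {
    "Asha": "City College",
    "Ravi": "City College",
    "Meena": "Hill College",
}

# Many to Many: a set of (student, project) pairs; either side may repeat.
assignments = {
    ("Asha", "Compiler"), ("Asha", "Website"),
    ("Ravi", "Website"),
}

# Asha belongs to exactly one college (many-to-one) but may work on
# several projects (many-to-many).
colleges_of_asha = {c for s, c in student_college.items() if s == "Asha"}
projects_of_asha = {p for s, p in assignments if s == "Asha"}
print(len(colleges_of_asha), len(projects_of_asha))  # 1 2
```

A one-to-one relationship would additionally require the dict's values to be unique, i.e., the mapping to be invertible.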
_____________________________________________________________________
__________________________________________
What is Structured Analysis?
Structured Analysis is a development method that allows the analyst to
understand the system and its activities in a logical way.
It is a systematic approach which uses graphical tools to analyze and refine
the objectives of an existing system and develop a new system specification
that can be easily understood by users.
It has following attributes −
● It is graphical: it specifies the presentation of the application.
● It divides the processes so that it gives a clear picture of system flow.
● It is logical rather than physical i.e., the elements of system do not depend
on vendor or hardware.
● It is an approach that works from high-level overviews to lower-level details.
Structured Analysis Tools
During Structured Analysis, various tools and techniques are used for system
development. They are −
● Data Flow Diagrams
● Data Dictionary
● Decision Trees
● Decision Tables
● Structured English
● Pseudocode
Data Flow Diagrams (DFD) or Bubble Chart
It is a technique developed by Larry Constantine to express the requirements of
system in a graphical form.
● It shows the flow of data between various functions of system and specifies
how the current system is implemented.
● It is an initial stage of design phase that functionally divides the
requirement specifications down to the lowest level of detail.
● Its graphical nature makes it a good communication tool between user and
analyst or analyst and system designer.
● It gives an overview of what data a system processes, what transformations
are performed, what data are stored, what results are produced and where
they flow.
Basic Elements of DFD
DFD is easy to understand and quite effective when the required design is not
clear and the user wants a notational language for communication. However, it
requires a large number of iterations for obtaining the most accurate and
complete solution.
The following table shows the symbols used in designing a DFD and their
significance −
Symbol Name       Meaning
Square            Source or Destination of Data
Arrow             Data Flow
Circle            Process transforming data flow
Open Rectangle    Data Store
Types of DFD
DFDs are of two types: Physical DFD and Logical DFD. The following table lists
the points that differentiate a physical DFD from a logical DFD.
Physical DFD:
● It is implementation dependent. It shows which functions are performed.
● It provides low-level details of hardware, software, files, and people.
● It depicts how the current system operates and how a system will be
implemented.
Logical DFD:
● It is implementation independent. It focuses only on the flow of data
between processes.
● It explains events of systems and data required by each event.
● It shows how the business operates, not how the system can be implemented.
Data Dictionary
A data dictionary is a structured repository of data elements in the system. It
stores the descriptions of all DFD data elements, that is, details and
definitions of data flows, data stores, data stored in data stores, and the
processes.
A data dictionary improves the communication between the analyst and the
user. It plays an important role in building a database. Most DBMSs have a data
dictionary as a standard feature. For example, refer the following table −
Sr. No.   Data Name   Description     No. of Characters
1         ISBN        ISBN Number     10
2         TITLE       title           60
3         SUB         Book Subjects   80
4         ANAME       Author Name     15
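The table above can be held as a simple in-memory structure. This is an illustrative sketch only; real DBMSs maintain the data dictionary internally:

```python
# Illustrative data dictionary: one entry per data element.
data_dictionary = [
    {"name": "ISBN",  "description": "ISBN Number",   "length": 10},
    {"name": "TITLE", "description": "title",         "length": 60},
    {"name": "SUB",   "description": "Book Subjects", "length": 80},
    {"name": "ANAME", "description": "Author Name",   "length": 15},
]

def lookup(name):
    """Return the entry for a data element, or None if it is undefined."""
    return next((e for e in data_dictionary if e["name"] == name), None)

print(lookup("ANAME")["length"])  # 15
```

Analysts and tools can consult such a dictionary to keep every DFD label defined exactly once.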
Decision Trees
Decision trees are a method for defining complex relationships by describing
decisions and avoiding the problems in communication. A decision tree is a
diagram that shows alternative actions and conditions within a horizontal tree
framework. Thus, it depicts which conditions to consider first, second, and so
on.
Decision trees depict the relationship of each condition and their permissible
actions. A square node indicates an action and a circle indicates a condition. It
forces analysts to consider the sequence of decisions and identifies the actual
decision that must be made.
The major limitation of a decision tree is that it lacks information in its
format to describe what other combinations of conditions you can take for
testing. It is a single representation of the relationships between
conditions and actions.
For example, refer the following decision tree −
Decision Tables
Decision tables are a method of describing the complex logical relationship in a
precise manner which is easily understandable.
● It is useful in situations where the resulting actions depend on the
occurrence of one or several combinations of independent conditions.
● It is a matrix containing rows and columns for defining a problem and the
actions.
Components of a Decision Table
● Condition Stub − It is in the upper left quadrant, which lists all the
conditions to be checked.
● Action Stub − It is in the lower left quadrant, which outlines all the
actions to be carried out to meet such conditions.
● Condition Entry − It is in the upper right quadrant, which provides answers
to the questions asked in the condition stub quadrant.
● Action Entry − It is in the lower right quadrant, which indicates the
appropriate action resulting from the answers to the conditions in the
condition entry quadrant.
The entries in decision table are given by Decision Rules which define the
relationships between combinations of conditions and courses of action. In
rules section,
● Y shows the existence of a condition.
● N represents the condition, which is not satisfied.
● A blank - against action states it is to be ignored.
● X (or a check mark will do) against action states it is to be carried out.
For example, refer the following table −
CONDITIONS                        Rule 1   Rule 2   Rule 3   Rule 4
Advance payment made              Y        N        N        N
Purchase amount >= Rs 10,000/-    -        Y        Y        N
Regular Customer                  -        Y        N        -
ACTIONS
Give 5% discount                  X        X        -        -
Give no discount                  -        -        X        X
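The discount table above can be encoded directly as data, so that adding a rule changes no control flow. This is a sketch; the rule encoding (None standing for a '-' don't-care entry) is an assumption of this example:

```python
# Each rule: (advance_payment, purchase_ge_10000, regular_customer) -> action.
# None represents '-' (condition not checked for that rule).
RULES = [
    ((True,  None,  None),  "5% discount"),   # Rule 1
    ((False, True,  True),  "5% discount"),   # Rule 2
    ((False, True,  False), "no discount"),   # Rule 3
    ((False, False, None),  "no discount"),   # Rule 4
]

def decide(advance, ge_10000, regular):
    """Return the action for the first rule whose conditions all match."""
    facts = (advance, ge_10000, regular)
    for conditions, action in RULES:
        if all(c is None or c == f for c, f in zip(conditions, facts)):
            return action
    raise ValueError("no matching rule")

print(decide(advance=True,  ge_10000=False, regular=False))  # 5% discount
print(decide(advance=False, ge_10000=True,  regular=False))  # no discount
```

Because the rules are data, a reviewer can check them against the table cell by cell.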
Structured English
Structured English is derived from the structured programming language and
gives a more understandable and precise description of a process. It is based
on procedural logic that uses constructs and imperative sentences designed to
perform operations.
● It is best used when sequences and loops in a program must be considered
and the problem needs sequences of actions with decisions.
● It does not have strict syntax rules. It expresses all logic in terms of
sequential decision structures and iterations.
For example, see the following sequence of actions −
if customer pays advance
    then
        Give 5% Discount
    else
        if purchase amount >= 10,000
            then
                if the customer is a regular customer
                    then Give 5% Discount
                    else No Discount
                end if
            else No Discount
        end if
end if
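The Structured English above translates almost line for line into executable code; the function and parameter names here are illustrative:

```python
def discount(pays_advance, purchase_amount, regular_customer):
    """Direct translation of the Structured English discount logic."""
    if pays_advance:
        return 5  # Give 5% Discount
    if purchase_amount >= 10_000:
        if regular_customer:
            return 5  # Give 5% Discount
        return 0      # No Discount
    return 0          # No Discount

print(discount(True, 2_000, False))    # 5
print(discount(False, 12_000, True))   # 5
print(discount(False, 12_000, False))  # 0
```

The close correspondence is the point of Structured English: it keeps analysis readable while remaining easy to implement.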
Pseudocode
A pseudocode does not conform to any programming language and expresses
logic in plain English.
● It may specify the physical programming logic without actual coding during
and after the physical design.
● It is used in conjunction with structured programming.
● It replaces the flowcharts of a program.
Guidelines for Selecting Appropriate Tools
Use the following guidelines for selecting the most appropriate tool that would
suit your requirements −
● Use DFDs at high or low level analysis for providing good system
documentation.
● Use data dictionary to simplify the structure for meeting the data
requirement of the system.
● Use structured English if there are many loops and actions are complex.
● Use decision tables when there are a large number of conditions to check
and logic is complex.
● Use decision trees when sequencing of conditions is important and if there
are few conditions to be tested.
_____________________________________________________________________
Types of Feasibility Study in Software Project
Development
A feasibility study in software engineering evaluates the feasibility of a
proposed project or system. It is one of the four important stages of the
software project management process. As the name suggests, a feasibility study
is a feasibility analysis: a measure of how beneficial development of the
software product will be for the organization from a practical point of view.
A feasibility study is carried out for many purposes, to analyze whether the
software product will be right in terms of development, implementation, its
contribution to the organization, and so on.
Types of Feasibility Study
The feasibility study mainly concentrates on the areas mentioned below.
Among these, the Economic Feasibility Study is the most important part of the
feasibility analysis, and the Legal Feasibility Study is the least considered.
. Technical Feasibility: In technical feasibility, the current resources (both
hardware and software) along with the required technology are analyzed and
assessed for developing the project. This study reports whether the correct
resources and technologies exist to be used for project development. It also
analyzes the technical skills and capabilities of the technical team, whether
the existing technology can be used, whether maintenance and upgrades will be
easy for the chosen technology, and so on.
. Operational Feasibility: In operational feasibility, the degree to which the
product will serve the requirements is analyzed, along with how easy the
product will be to operate and maintain after deployment. Other operational
aspects include determining the usability of the product and whether the
solution suggested by the software development team is acceptable.
. Economic Feasibility: In an economic feasibility study, the cost and benefit
of the project are analyzed. A detailed analysis is carried out of what the
cost of the project will be, including all costs of final development such as
hardware and software resources, design and development cost, operational
cost, and so on. It is then analyzed whether the project will be financially
beneficial for the organization or not.
. Legal Feasibility: In a legal feasibility study, the project is analyzed
from a legal point of view. This includes analyzing barriers to legal
implementation of the project, data protection acts or social media laws,
project certificates, licenses, copyright, etc. Overall, a legal feasibility
study determines whether the proposed project conforms to legal and ethical
requirements.
. Schedule Feasibility: In a schedule feasibility study, the timelines and
deadlines of the proposed project are analyzed, including how much time the
teams will take to complete the final project. This has a great impact on the
organization, as the purpose of the project may fail if it cannot be completed
on time.
. Cultural and Political Feasibility: This section assesses how the software
project will affect the political environment and organizational culture. This
analysis takes into account the organization’s culture and how the
suggested changes could be received there, as well as any potential
political obstacles or internal opposition to the project. It is essential that
cultural and political factors be taken into account in order to execute
projects successfully.
. Market Feasibility: This refers to evaluating the market’s willingness and
ability to accept the suggested software system. Analyzing the target
market, understanding consumer wants and assessing possible rivals are all
part of this study. It assists in identifying whether the project is in line with
market expectations and whether there is a feasible market for the good or
service being offered.
. Resource Feasibility: This method evaluates if the resources needed to
complete the software project successfully are adequate and readily
available. Financial, technological and human resources are all taken into
account in this study. It guarantees that sufficient hardware, software,
trained labor and funding are available to complete the project successfully.
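The cost-benefit arithmetic described under Economic Feasibility above can be sketched in a few lines. All figures and function names below are hypothetical, and real studies would typically also discount future cash flows:

```python
def economic_feasibility(dev_cost, annual_operating_cost, annual_benefit, years):
    """Summarize costs vs. benefits over a planning horizon (no discounting)."""
    total_cost = dev_cost + annual_operating_cost * years
    total_benefit = annual_benefit * years
    return {
        "total_cost": total_cost,
        "total_benefit": total_benefit,
        "net_benefit": total_benefit - total_cost,
        "benefit_cost_ratio": total_benefit / total_cost,
    }

# Example: Rs 100,000 to build, Rs 20,000/year to run,
# Rs 60,000/year in benefits, over a 5-year horizon.
summary = economic_feasibility(100_000, 20_000, 60_000, 5)
```

A benefit-cost ratio above 1 suggests the project is economically feasible under these assumptions.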
_____________________________________________________________________
User Interface Design – Software Engineering
The user interface is the front-end application view through which the user
interacts with the software. The software becomes more popular if its user
interface is:
. Attractive
. Simple to use
. Responsive in a short time
. Clear to understand
. Consistent on all interface screens
Types of User Interface
. Command Line Interface: The Command Line Interface provides a
command prompt, where the user types the command and feeds it to the
system. The user needs to remember the syntax of the command and its
use.
. Graphical User Interface: A graphical user interface provides a simple
interactive interface to interact with the system. A GUI can be a combination
of both hardware and software. Using a GUI, the user interacts with the
software.
User Interface Design Process
The analysis and design process of a user interface is iterative and can be
represented by a spiral model. The analysis and design process of user
interface consists of four framework activities.
1. User, Task, Environmental Analysis, and Modeling
Initially, the focus is on the profile of the users who will interact with the
system, i.e., their understanding, skill and knowledge, type of user, etc.
Based on their profiles, users are grouped into categories, and requirements
are gathered from each category. Based on these requirements, the developer
understands how to develop the interface. Once all the requirements are
gathered, a detailed analysis is conducted. In the analysis part, the tasks
that the user performs to establish the goals of the system are identified,
described, and elaborated. The analysis of the user environment focuses on the
physical work environment.
Among the questions to be asked are:
. Where will the interface be located physically?
. Will the user be sitting, standing, or performing other tasks unrelated to the
interface?
. Does the interface hardware accommodate space, light, or noise
constraints?
. Are there special human factors considerations driven by environmental
factors?
2. Interface Design
The goal of this phase is to define the set of interface objects and actions i.e.,
control mechanisms that enable the user to perform desired tasks. Indicate
how these control mechanisms affect the system. Specify the action sequence
of tasks and subtasks, also called a user scenario. Indicate the state of the
system when the user performs a particular task. Always follow the three
golden rules stated by Theo Mandel. Design issues such as response time,
command and action structure, error handling, and help facilities are
considered as the design model is refined. This phase serves as the foundation
for the implementation phase.
3. Interface Construction and Implementation
The implementation activity begins with the creation of a prototype (model)
that enables usage scenarios to be evaluated. As the iterative design process
continues, a user interface toolkit that allows the creation of windows, menus,
device interaction, error messages, commands, and many other elements of an
interactive environment can be used to complete the construction of the
interface.
4. Interface Validation
This phase focuses on testing the interface. The interface should perform
tasks correctly and be able to handle a variety of tasks. It should achieve
all the user's requirements, and it should be easy to use and easy to learn.
Users should accept the interface as a useful one in their work.
_____________________________________________________________________
Unified Modeling Language (UML) Diagrams
Unified Modeling Language (UML) is a general-purpose modeling language.
The main aim of UML is to define a standard way to visualize the way a system
has been designed. It is quite similar to blueprints used in other fields of
engineering. UML is not a programming language, it is rather a visual
language.
● We use UML diagrams to portray the behavior and structure of a system.
● UML helps software engineers, businessmen, and system architects with
modeling, design, and analysis.
Why do we need UML?
● Complex applications need collaboration and planning from multiple teams
and hence require a clear and concise way to communicate amongst them.
● Businessmen do not understand code. So UML becomes essential to
communicate with non-programmers about essential requirements,
functionalities, and processes of the system.
● A lot of time is saved down the line when teams can visualize processes,
user interactions, and the static structure of the system.
3. Structural UML Diagrams
3.1. Class Diagram
The most widely used UML diagram is the class diagram. It is the building block
of all object-oriented software systems. We use class diagrams to depict the
static structure of a system by showing the system's classes, their methods,
and attributes. Class diagrams also help us identify relationships between
different classes or objects.
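To make this concrete, here is the kind of static structure a class diagram captures: attributes, methods, an inheritance (generalization) arrow, and an association. The classes below are hypothetical:

```python
class Account:
    """A class box: attributes and methods."""
    def __init__(self, owner, balance=0.0):
        self.owner = owner              # attribute
        self.balance = balance          # attribute

    def deposit(self, amount):          # method
        self.balance += amount

class SavingsAccount(Account):          # inheritance: SavingsAccount --|> Account
    def __init__(self, owner, balance=0.0, rate=0.03):
        super().__init__(owner, balance)
        self.rate = rate

class Customer:                         # association: Customer "1" -- "many" Account
    def __init__(self, name):
        self.name = name
        self.accounts = []              # one-to-many relationship

    def open_account(self, account):
        self.accounts.append(account)
```

A class diagram would show these three boxes, the generalization arrow from SavingsAccount to Account, and the one-to-many association from Customer to Account.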
4. Behavioral UML Diagrams
4.1. Activity Diagrams
We use Activity Diagrams to illustrate the flow of control in a system. We can
also use an activity diagram to refer to the steps involved in the execution of a
use case.
● We model sequential and concurrent activities using activity diagrams. So,
we basically depict workflows visually using an activity diagram.
● An activity diagram focuses on the condition of flow and the sequence in
which it happens.
● We describe or depict what causes a particular event using an activity
diagram.
4.2. Use Case Diagrams
Use Case Diagrams are used to depict the functionality of a system or a part of
a system. They are widely used to illustrate the functional requirements of the
system and its interaction with external agents(actors).
● A use case is basically a diagram representing different scenarios where
the system can be used.
● A use case diagram gives us a high level view of what the system or a part
of the system does without going into implementation details.
Structural and behavioral diagrams
Structural and behavioral diagrams are two types of diagrams used in UML
(Unified Modeling Language) to represent different aspects of a system. While
both types of diagrams are used to model different aspects of a system, they
serve different purposes and focus on different aspects of the system.
What are Structural Diagrams?
Structural diagrams in UML (Unified Modeling Language) represent the static
structure of a system. They depict the components of the system and their
relationships. Structural diagrams are used to visualize the organization,
composition, and relationships between the different elements in a system.
What are Behavioral Diagrams?
Behavioral diagrams in UML represent the dynamic behavior of a system. They
describe how the components of a system interact with each other and how the
system responds to external stimuli. Behavioral diagrams are used to visualize
the behavior of a system over time.
Structural Diagrams Vs. Behavioral Diagrams
Below are the key differences between Structural and Behavioral diagrams.
Aspect              Structural Diagrams                Behavioral Diagrams
Purpose             Show the static structure          Illustrate the dynamic
                    of the system                      behavior of the system
Focus               Focus on the components,           Focus on the interactions
                    classes, and their                 between components and
                    relationships                      classes
Elements            Classes, objects, interfaces,      Activities, states, messages,
                    components, and their              events, and their
                    relationships                      interactions
Representation      Use class diagrams, object         Use activity diagrams, state
                    diagrams, component                machine diagrams, sequence
                    diagrams, etc.                     diagrams, etc.
Time                Time-independent, as they          Time-dependent, as they
                    depict the structure at a          describe how the system
                    specific moment                    behaves over time
Example Use Cases   Class diagrams for modeling        Sequence diagrams for
                    the structure of a software        depicting the interaction
                    system                             between objects in a
                                                       specific scenario
_____________________________________________________________________
Computer Aided Software Engineering (CASE)
Computer-aided software engineering (CASE) is the implementation of
computer-facilitated tools and methods in software development. CASE is used
to ensure high-quality and defect-free software. CASE ensures a check-pointed
and disciplined approach and helps designers, developers, testers, managers,
and others to see the project milestones during development.
CASE can also help as a warehouse for documents related to projects, like
business plans, requirements, and design specifications. One of the major
advantages of using CASE is the delivery of the final product, which is more
likely to meet real-world requirements as it ensures that customers remain part
of the process.
CASE covers a wide set of labor-saving tools that are used in software
development. It generates a framework for organizing projects and helps
enhance productivity. There was more interest in the concept of CASE tools
years ago, but less so today, as the tools have morphed into different
functions, often in reaction to software developers' needs. The concept of
CASE also received a heavy dose of criticism after its release.
What are CASE Tools?
The essential idea of CASE tools is that built-in programs can help analyze
developing systems in order to enhance quality and provide better outcomes.
Throughout the 1990s, CASE tools became part of the software lexicon, and big
companies like IBM were using these kinds of tools to help create software.
Various tools are incorporated in CASE and are called CASE tools,
which are used to support different stages and milestones in a software
development life cycle.
Types of CASE Tools:
. Diagramming Tools: These help in diagrammatic and graphical
representations of the data and system processes. They represent system
elements, control flow, and data flow among different software components
and system structures in a pictorial form. For example, Flow Chart Maker is
a tool for making state-of-the-art flowcharts.
. Computer Display and Report Generators: These help in understanding
the data requirements and the relationships involved.
○ Analysis Tools: These focus on inconsistent or incorrect specifications
involved in the diagram and data flow. They help in collecting
requirements and automatically checking for any irregularity or
imprecision in the diagrams, data redundancies, or erroneous omissions.
For example:
○ (i) Accept 360, Accompa, CaseComplete for requirement analysis.
○ (ii) Visible Analyst for total analysis.
. Central Repository: It provides a single point of storage for data diagrams,
reports, and documents related to project management.
. Documentation Generators: It helps in generating user and technical
documentation as per standards. It creates documents for technical users
and end users.
For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.
. Code Generators: It aids in the auto-generation of code, including
definitions, with the help of designs, documents, and diagrams.
. Tools for Requirement Management: It makes gathering, evaluating, and
managing software needs easier.
. Tools for Analysis and Design: It offers instruments for modelling system
architecture and behaviour, which helps throughout the analysis and design
stages of software development.
. Tools for Database Management: It facilitates database construction,
design, and administration.
. Tools for Documentation: It makes the process of creating, organizing,
and maintaining project documentation easier.
Advantages of the CASE approach:
● Improved Documentation: CASE tools make creating and maintaining
comprehensive documentation easier. Since automatically generated
documentation is usually more accurate and up to date, there are fewer
opportunities for errors and misunderstandings brought on by out-of-date
material.
● Reusing Components: Reusable component creation and maintenance are
frequently facilitated by CASE tools. This encourages a development
approach that is modular and component-based, enabling teams to shorten
development times and reuse tested solutions.
● Quicker Cycles of Development: Development cycles take less time when
certain jobs, such as testing and code generation, are automated. This may
result in software solutions being delivered more quickly, meeting
deadlines and keeping up with changing business requirements.
● Improved Results: Code generation, documentation, and testing are just a
few of the time-consuming, repetitive operations that CASE tools perform.
Due to this automation, engineers are able to concentrate on more intricate
and imaginative facets of software development, which boosts output.
● Achieving uniformity and standardization: Coding conventions,
documentation formats and design patterns are just a few of the areas of
software development where CASE tools enforce uniformity and standards.
This guarantees consistent and maintainable software development.
Disadvantages of the CASE approach:
● Cost: Using CASE tools is very costly. Most firms engaged in small-scale
software development do not invest in CASE tools because they think that
the benefit of CASE is justifiable only in the development of large
systems.
● Learning Curve: In most cases, programmer productivity may fall in the
initial phase of implementation because users need time to learn the
technology. Many consultants offer training and on-site services, which can
be important for accelerating the learning curve and for the deployment and
use of the CASE tools.
● Tool Mix: It is important to build an appropriate selection of tools to
derive cost advantages. CASE integration and data integration across all
platforms are extremely important.
_____________________________________________________________________
Software Testing- Test Case
Software testing is a process for validating and verifying the working of a
software application. It makes sure that the software works without errors,
bugs, or other issues and gives the expected output to the user. The software
testing process isn't limited to finding faults in the present software; it
also finds measures to upgrade the software on various factors such as
efficiency, usability, and accuracy. To test software, software testing
provides a particular format called a test case.
What is a Test Case?
A test case is a defined format for software testing required to check whether
a particular application or piece of software is working or not. A test case
consists of a certain set of conditions that need to be checked to test an
application or software; in simpler terms, when the conditions are checked, it
checks whether the resulting output meets the expected output or not. A test
case consists of various parameters such as ID, condition, steps, input,
expected result, actual result, status, and remarks.
Parameters of a Test Case:
● Module Name: Subject or title that defines the functionality of the test.
● Test Case Id: A unique identifier assigned to every single condition in a test
case.
● Tester Name: The name of the person who would be carrying out the test.
● Test Case Description: The condition required to be checked for a given
software, e.g., check whether only-numbers validation works for an age
input box.
● Prerequisite: The conditions required to be fulfilled before the start of the
test process.
● Test Priority: As the name suggests gives priority to the test cases that
had to be performed first, or are more important and that could be
performed later.
● Test Data: The inputs to be taken while checking for the conditions.
● Test Expected Result: The output which should be expected at the end of
the test.
● Test parameters: Parameters assigned to a particular test case.
● Actual Result: The output that is displayed at the end.
● Environment Information: The environment in which the test is being
performed, such as the operating system, security information, the
software name, software version, etc.
● Status: The status of tests such as pass, fail, NA, etc.
● Comments: Remarks on the test regarding the test for the betterment of
the software.
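These parameters are often captured as a simple record. A hypothetical sketch using a Python dataclass, with field names following the list above:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    test_case_id: str
    module_name: str
    tester_name: str
    description: str
    prerequisite: str
    priority: str
    test_data: dict
    expected_result: str
    actual_result: str = ""
    status: str = "NA"                  # pass / fail / NA
    comments: str = ""

tc = TestCase(
    test_case_id="TC-001",
    module_name="Signup form",
    tester_name="R. Rao",
    description="Age field accepts only numbers",
    prerequisite="Signup page is open",
    priority="High",
    test_data={"age": "abc"},
    expected_result="Validation error shown",
)

# After executing the steps, record the outcome and derive the status.
tc.actual_result = "Validation error shown"
tc.status = "pass" if tc.actual_result == tc.expected_result else "fail"
```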
Types of Test Cases
● Functionality Test Case: The functionality test case is to determine if the
interface of the software works smoothly with the rest of the system and its
users or not. Black box testing is used while checking for this test case, as
we check everything externally and not internally for this test case.
● Unit Test Case: A unit test case is where an individual part or a single
unit of the software is tested. Here each unit/individual part is tested,
and we create a different test case for each unit.
● User Interface Test Case: The UI test or user interface test is when every
component of the UI that the user would come in contact with is tested. It is
to test if the UI components requirement made by the user are fulfilled or
not.
● Integration Test Case: Integration testing is when all the units of the
software are combined and then they are tested. It is to check that each
component and its units work together without any issues.
● Performance Test Case: The performance test case helps to determine
response time as well as the overall effectiveness of the system/software.
It’s to see if the application will handle real-world expectations.
● Database Test Case: Also known as back-end testing or data testing
checks that everything works fine concerning the database. Testing cases
for tables, schema, triggers, etc. are done.
● Security Test Case: The security test case helps to determine that the
application restricts actions as well as permissions wherever necessary.
Encryption and authentication are considered as main objectives of the
security test case. The security test case is done to protect and safeguard
the data of the software.
● Usability Test Case: Also known as a user experience test case, it checks
how user-friendly or easy to approach a software would be. Usability test
cases are designed by the user experience team and performed by the
testing team.
● User Acceptance Test Case: The user acceptance test case is prepared by
the testing team, but the user/client does the testing and reviews whether
it works in the real-world environment.
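As an illustration of a unit test case, the sketch below tests one function in isolation with Python's standard unittest module; the function under test is hypothetical:

```python
import unittest

def apply_discount(price, percent):
    """Unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 5), 190.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(200.0, 0), 200.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)
```

Each unit gets its own test case; the three methods here cover normal, boundary, and error behavior.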
_____________________________________________________________________
What is a Test Suite?
A test suite is a container that holds a set of tests and helps testers
execute and report the test execution status. It can take any of three states,
namely Active, In Progress, and Completed.
A test case can be added to multiple test suites and test plans. After
creating a test plan, test suites are created, which in turn can have any
number of tests. Test suites are created based on the cycle or on the scope.
A suite can contain any type of test, viz. functional or non-functional.
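A minimal sketch of a test suite with Python's standard unittest module; the two test classes are placeholders standing in for real tests:

```python
import unittest

class LoginTests(unittest.TestCase):
    def test_valid_login(self):
        self.assertTrue(True)           # placeholder functional check

class ReportTests(unittest.TestCase):
    def test_report_totals(self):
        self.assertEqual(2 + 2, 4)      # placeholder check

# A suite is a container: tests from several cases grouped for one run.
loader = unittest.TestLoader()
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(LoginTests))
suite.addTests(loader.loadTestsFromTestCase(ReportTests))

result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same test case classes could be added to other suites as well, matching the point above that a test case can belong to multiple suites.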
_____________________________________________________________________
What is Software Testing?
Software testing can be stated as the process of verifying and validating
whether a software or application is bug-free, meets the technical requirements
as guided by its design and development, and meets the user requirements
effectively and efficiently by handling all the exceptional and boundary cases.
The process of software testing aims not only at finding faults in the
existing software but also at finding measures to improve the software in
terms of efficiency, accuracy, and usability.
Software testing can be divided into two steps:
. Verification: It refers to the set of tasks that ensure that the software
correctly implements a specific function. It means “Are we building the
product right?”.
. Validation: It refers to a different set of tasks that ensure that the software
that has been built is traceable to customer requirements. It means “Are we
building the right product?”.
Importance of Software Testing:
● Defects can be identified early: Software testing is important because if
there are any bugs they can be identified early and can be fixed before the
delivery of the software.
● Improves quality of software: Software Testing uncovers the defects in
the software, and fixing them improves the quality of the software.
● Increased customer satisfaction: Software testing ensures reliability,
security, and high performance which results in saving time, costs, and
customer satisfaction.
● Helps with scalability: Non-functional testing helps to identify
scalability issues and the point at which an application might stop
working.
● Saves time and money: After the application is launched it will be very
difficult to trace and resolve the issues, as performing this activity will incur
more costs and time. Thus, it is better to conduct software testing at
regular intervals during software development.
Different Types of Software Testing
. Manual Testing
. Automation Testing
1. Manual Testing
Manual testing is a technique to test the software that is carried out using the
functions and features of an application. In manual software testing, a tester
carries out tests on the software by following a set of predefined test cases. In
this testing, testers make test cases for the codes, test the software, and give
the final report about that software. Manual testing is time-consuming because
it is done by humans, and there is a chance of human errors.
Advantages of Manual Testing:
● Fast and accurate visual feedback: It detects almost every bug in the
software application and is used to test the dynamically changing GUI
designs like layout, text, etc.
● Less expensive: It is less expensive as it does not require any high-level
skill or a specific type of tool.
● No coding is required: No programming knowledge is required while using
the black box testing method. It is easy to learn for the new testers.
● Efficient for unplanned changes: Manual testing is suitable in case of
unplanned changes to the application, as it can be adopted easily.
2. Automation Testing
Automated Testing is a technique where the Tester writes scripts on their own
and uses suitable Software or Automation Tool to test the software. It is an
Automation Process of a Manual Process. It allows for executing repetitive tasks
without the intervention of a Manual Tester.
Advantages of Automation Testing:
● Simplifies Test Case Execution: Automation testing can be left virtually
unattended and thus it allows monitoring of the results at the end of the
process. Thus, simplifying the overall test execution and increasing the
efficiency of the application.
● Improves Reliability of Tests: Automation testing ensures that there is
equal focus on all the areas of the testing, thus ensuring the best quality
end product.
● Increases amount of test coverage: Using automation testing, more test
cases can be created and executed for the application under test. Thus,
resulting in higher test coverage and the detection of more bugs. This
allows for the testing of more complex applications and more features can
be tested.
● Minimizing Human Interaction: In automation testing, everything from test
case creation to execution is automated, so there are no chances of human
error due to neglect. This reduces the necessity of fixing glitches in the
post-release phase.
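The core of automation is a script repeating the same check over many inputs without a person in the loop. A minimal sketch; the validation function and test data are hypothetical:

```python
def is_valid_age(value):
    """Hypothetical unit under test: accept whole-number ages 1..120."""
    return value.isdigit() and 0 < int(value) <= 120

# (input, expected) pairs the script runs through unattended.
cases = [
    ("25", True),
    ("1", True),
    ("0", False),      # below the lower boundary
    ("121", False),    # above the upper boundary
    ("abc", False),    # non-numeric
    ("", False),       # empty input
]

failures = [(inp, exp) for inp, exp in cases
            if is_valid_age(inp) != exp]
```

An empty failures list means every case behaved as expected; in a real pipeline the script would report the result automatically at the end of the run.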
Types of Manual Testing
. White Box Testing
. Black Box Testing
. Gray Box Testing
1. White Box Testing
White box testing techniques analyze the internal structures of the software:
the data structures used, the internal design, the code structure, and the
working of the software, rather than just its functionality as in black box
testing. It is also called glass box testing, clear box testing, or structural
testing, and is likewise known as transparent testing or open box testing.
White box testing is a software testing technique
that involves testing the internal structure and workings of a software
application. The tester has access to the source code and uses this knowledge
to design test cases that can verify the correctness of the software at the code
level.
Advantages of Whitebox Testing:
● Thorough Testing: White box testing is thorough as the entire code and
structures are tested.
● Code Optimization: It results in the optimization of code removing errors
and helps in removing extra lines of code.
● Early Detection of Defects: It can start at an earlier stage, as it doesn't
require any interface, as in the case of black box testing.
● Integration with SDLC: White box testing can be easily started in the
Software Development Life Cycle.
● Detection of Complex Defects: Testers can identify defects that cannot
be detected through other testing techniques.
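A small sketch of the white-box idea: because the tester can read the code, inputs are chosen so that every branch executes at least once. The function is illustrative:

```python
def grade(score):
    """Code visible to the tester: three branches to cover."""
    if score >= 90:
        return "A"
    elif score >= 60:
        return "B"
    else:
        return "C"

# One input per branch gives full branch coverage.
assert grade(95) == "A"     # exercises the first branch
assert grade(75) == "B"     # exercises the elif branch
assert grade(10) == "C"     # exercises the else branch
```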
2. Black Box Testing
Black-box testing is a type of software testing in which the tester is not
concerned with the internal knowledge or implementation details of the
software but rather focuses on validating the functionality based on the
provided specifications or requirements.
Advantages of Black Box Testing:
● The tester does not need deep functional knowledge or programming skills
to implement black box testing.
● It is efficient for implementing the tests in the larger system.
● Tests are executed from the user’s or client’s point of view.
● Test cases are easily reproducible.
● It is used to find the ambiguity and contradictions in the functional
specifications.
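A small sketch of the black-box idea: the tests are derived purely from a stated rule ("the discount applies at or above 10,000"), using boundary values around the threshold, with no reliance on how the check is implemented:

```python
def discount_applies(amount):
    # To the black-box tester this body is invisible;
    # only the stated specification matters.
    return amount >= 10_000

# Boundary-value tests taken from the specification alone.
assert discount_applies(9_999) is False    # just below the boundary
assert discount_applies(10_000) is True    # exactly on the boundary
assert discount_applies(10_001) is True    # just above the boundary
```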
3. Gray Box Testing
Gray Box Testing is a software testing technique that is a combination of
the Black Box Testing technique and the White Box Testing technique.
. In the Black Box Testing technique, the tester is unaware of the internal
structure of the item being tested and in White Box Testing the internal
structure is known to the tester.
. The internal structure is partially known in Gray Box Testing.
. This includes access to internal data structures and algorithms to design
the test cases.
Advantages of Gray Box Testing:
. Clarity of goals: Users and developers have clear goals while doing
testing.
. Done from a user perspective: Gray box testing is mostly done from the
user perspective.
. High programming skills not required: Testers are not required to have
high programming skills for this testing.
. Non-intrusive: Gray box testing is non-intrusive.
. Improved product quality: Overall quality of the product is improved.
Types of Black Box Testing
. Functional Testing
. Non-Functional Testing
1. Functional Testing
Functional Testing is a type of Software Testing in which the system is tested
against the functional requirements and specifications. Functional testing
ensures that the requirements or specifications are properly satisfied by the
application. This type of testing is particularly concerned with the result of
processing. It focuses on simulating actual system usage but does not make
any assumptions about the system's structure.
Benefits of Functional Testing:
● Bug-free product: Functional testing ensures the delivery of a bug-free
and high-quality product.
● Customer satisfaction: It ensures that all requirements are met and
ensures that the customer is satisfied.
● Testing focussed on specifications: Functional testing is focussed on
specifications as per customer usage.
● Proper working of application: This ensures that the application works as
expected and ensures proper working of all the functionality of the
application.
● Improves quality of the product: Functional testing ensures the security
and safety of the product and improves the quality of the product.
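As a minimal sketch, functional testing can be expressed as checks against stated requirements made purely through the public interface, without looking at internals. The login function, the user store, and the requirement numbering below are all hypothetical:

```python
# Hypothetical unit: a login function checked only through its public
# interface, against spec-style requirements (black-box, functional).
def login(username, password, users):
    return users.get(username) == password

USERS = {"alice": "s3cret"}

# Requirement 1: valid credentials are accepted.
assert login("alice", "s3cret", USERS) is True
# Requirement 2: a wrong password is rejected.
assert login("alice", "wrong", USERS) is False
# Requirement 3: an unknown user is rejected.
assert login("bob", "s3cret", USERS) is False
```

Each assertion maps one-to-one to a requirement, which is the essence of specification-focused testing.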
2. Non-Functional Testing
Non-functional Testing is a type of Software Testing that is performed to
verify the non-functional requirements of the application, such as performance,
usability, and reliability. It checks whether the behavior of the system meets
these requirements and covers all the aspects that are not examined by
functional testing. It is designed to assess the readiness of a system against
non-functional parameters that are never addressed by functional testing, and
it is just as important as functional testing.
Benefits of Non-functional Testing
● Improved performance: Non-functional testing checks the performance of
the system and determines the performance bottlenecks that can affect the
performance.
● Less time-consuming: Non-functional testing is overall less time-
consuming than other testing processes.
● Improves user experience: Non-functional testing, such as usability
testing, checks how easy to use and user-friendly the software is, and
thus focuses on improving the overall user experience of the application.
● More secure product: Non-functional testing includes security testing,
which checks the security bottlenecks of the application and how secure
the application is against attacks from internal and external sources.
Types of Functional Testing
. Unit Testing
. Integration Testing
. System Testing
1. Unit Testing
Unit testing is a method of testing individual units or components of a software
application. It is typically done by developers and is used to ensure that the
individual units of the software are working as intended. Unit tests are usually
automated and are designed to test specific parts of the code, such as a
particular function or method. Unit testing is done at the lowest level of
the software development process, where individual units of code are tested in
isolation.
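The idea can be sketched with Python's built-in unittest framework. The apply_discount function below is a hypothetical unit; the tests exercise it in complete isolation from any other code:

```python
import unittest

# Hypothetical unit under test: a single function with no dependencies.
def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to 2 places."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method checks one behavior of the unit, so a failing test points directly at the code responsible.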
Unit Testing Tools
Here are some commonly used Unit Testing tools:
. Jtest
. Junit
. NUnit
. EMMA
. PHPUnit
Advantages of Unit Testing:
Some of the advantages of Unit Testing are listed below.
● It helps to identify bugs early in the development process before they
become more difficult and expensive to fix.
● It helps to ensure that changes to the code do not introduce new bugs.
● It makes the code more modular and easier to understand and maintain.
● It helps to improve the overall quality and reliability of the software.
● It’s important to keep in mind that Unit Testing is only one aspect of
software testing and it should be used in combination with other types of
testing such as integration testing, functional testing, and acceptance
testing to ensure that the software meets the needs of its users.
● It focuses on the smallest unit of software design. In this, we test an
individual unit or group of interrelated units. It is often done by the
programmer by using sample input and observing its corresponding
outputs.
2. Integration Testing
Integration testing is a method of testing how different units or components of
a software application interact with each other. It is used to identify and resolve
any issues that may arise when different units of the software are combined.
Integration testing is typically done after unit testing and before functional
testing and is used to verify that the different units of the software work
together as intended.
Different Ways of Performing Integration Testing:
Different ways of Integration Testing are discussed below.
● Top-down integration testing: It starts with the highest-level modules and
integrates them with lower-level modules.
● Bottom-up integration testing: It starts with the lowest-level modules and
integrates them with higher-level modules.
Advantages of Integration Testing
● It helps to identify and resolve issues that may arise when different units of
the software are combined.
● It helps to ensure that the different units of the software work together as
intended.
● It helps to improve the overall reliability and stability of the software.
● It’s important to keep in mind that Integration testing is essential for
complex systems where different components are integrated.
● As with unit testing, integration testing is only one aspect of software
testing and it should be used in combination with other types of testing
such as unit testing, functional testing, and acceptance testing to ensure
that the software meets the needs of its users.
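A minimal sketch of the idea, assuming two hypothetical units (a small parser and an evaluator) that have each already passed unit testing in isolation:

```python
# Hypothetical units: a parser and an evaluator, each assumed to have
# passed unit testing in isolation. The integration test checks that
# they work correctly when combined.
def parse_expression(text):
    left, op, right = text.split()
    return float(left), op, float(right)

def evaluate(left, op, right):
    return {"+": left + right, "-": left - right}[op]

def test_parser_and_evaluator_integrate():
    # The output of one unit feeds directly into the other.
    left, op, right = parse_expression("3 + 4")
    assert evaluate(left, op, right) == 7.0
    left, op, right = parse_expression("10 - 2.5")
    assert evaluate(left, op, right) == 7.5

test_parser_and_evaluator_integrate()
```

A unit test could pass for each function separately while the pair still disagreed on a data format; the integration test catches exactly that class of issue.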
3. System Testing
System testing is a type of software testing that evaluates the overall
functionality and performance of a complete and fully integrated software
solution. It tests if the system meets the specified requirements and if it is
suitable for delivery to the end-users. This type of testing is performed after
the integration testing and before the acceptance testing.
System Testing is a type of software testing that is performed on a completely
integrated system to evaluate the compliance of the system with the
corresponding requirements. In system testing, components that have passed
integration testing are taken as input; the goal is to detect any remaining
irregularities between the integrated units.
Advantages of System Testing:
● The testers do not require deep programming knowledge to carry out
this testing.
● It will test the entire product or software so that we will easily detect the
errors or defects that cannot be identified during the unit testing and
integration testing.
● The testing environment is similar to that of the real-time production or
business environment.
● It checks the entire functionality of the system with different test scripts
and also it covers the technical and business requirements of clients.
● After this testing, most of the possible bugs and errors will have been
found, and hence the development team can confidently go ahead with
acceptance testing.
Types of Integration Testing
. Incremental Testing
. Non-Incremental Testing
1. Incremental Testing
Like development, testing is also a phase of SDLC (Software Development Life
Cycle). Different tests are performed at different stages of the development
cycle. Incremental testing is one of the testing approaches that is commonly
used in the software field during the testing phase of integration testing which
is performed after unit testing. Several stubs and drivers are used to test the
modules one after another, which helps in discovering errors and defects in
specific modules.
Advantages of Incremental Testing:
● Each module has its specific significance and plays its own role during
testing, as the modules are integrated one at a time.
● Defects are detected in smaller modules, rather than having to locate and
correct errors across large files.
● It’s more flexible and cost-efficient as per requirements and scopes.
● The customer gets the chance to respond to each build.
There are 2 Types of Incremental Testing
. Top-down Integration Testing
. Bottom-up Integration Testing
1. Top-down Integration Testing
Top-down testing is a type of incremental integration testing approach in
which testing is done by integrating or joining two or more modules by moving
down from top to bottom through the control flow of the architecture structure.
In these, high-level modules are tested first, and then low-level modules are
tested. Then, finally, integration is done to ensure that the system is working
properly. Stubs and drivers are used to carry out this testing. This technique is
used to simulate the behavior of lower-level modules that are not yet
integrated.
Advantages of Top-Down Integration Testing
. There is no need to write drivers.
. Interface errors are identified at an early stage and fault localization is also
easier.
. Low-level utilities that are not important need not be tested thoroughly,
while the high-level modules are tested well in an appropriate manner.
. Representation of test cases is easier and simpler once Input-Output
functions are added.
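A top-down sketch in Python, assuming a hypothetical high-level OrderService whose lower-level payment dependency is not yet implemented and is therefore replaced by a stub:

```python
# Top-down sketch: the high-level OrderService is tested first, while its
# lower-level dependency (a payment module not yet implemented) is
# replaced by a stub. All class and method names here are hypothetical.
class PaymentGatewayStub:
    def charge(self, amount):
        # Stub: returns a canned success response instead of real logic.
        return {"status": "ok", "charged": amount}

class OrderService:
    """High-level module under test."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "ok"

service = OrderService(PaymentGatewayStub())
assert service.place_order(49.95) is True
```

When the real payment module is ready, it replaces the stub and the same test is repeated against the genuine integration.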
2. Bottom-up Integration Testing
Bottom-up Testing is a type of incremental integration testing approach in
which testing is done by integrating or joining two or more modules by moving
upward from bottom to top through the control flow of the architecture
structure. In these, low-level modules are tested first, and then high-level
modules are tested. This approach is also known as inductive reasoning and,
in many cases, is used as a synonym for synthesis. Bottom-up testing is
user-friendly and contributes to overall software development, with high
success rates and long-lasting results.
Advantages of Bottom-up Integration Testing:
● It is easy and simple to create and develop test conditions.
● It is also easy to observe test results.
● It is not necessary to know about the details of the structural design.
● Low-level utilities are also tested well and are also compatible with the
object-oriented structure.
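A bottom-up sketch in Python: the hypothetical low-level compute_tax module is ready, and a simple test driver stands in for the higher-level checkout module that will eventually call it:

```python
# Bottom-up sketch: the low-level tax module exists, but the higher-level
# checkout module that will eventually call it does not yet. A test driver
# plays the role of that missing caller. All names are hypothetical.
def compute_tax(subtotal, rate):
    """Low-level, already-implemented unit."""
    return round(subtotal * rate, 2)

def driver():
    # Driver: supplies inputs and checks outputs on behalf of the
    # not-yet-written higher-level module.
    cases = [(100.0, 0.08, 8.0), (50.0, 0.10, 5.0)]
    for subtotal, rate, expected in cases:
        assert compute_tax(subtotal, rate) == expected
    return "all cases passed"

print(driver())
```

The driver is throwaway code: once the real higher-level module exists, it calls compute_tax directly and the driver is discarded.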
3. Big-Bang Integration Testing
It is the simplest integration testing approach, where all the modules are
combined and the functionality is verified after the completion of individual
module testing. In simple words, all the modules of the system are simply put
together and tested. This approach is practicable only for very small systems. If
an error is found during the integration testing, it is very difficult to localize the
error as the error may potentially belong to any of the modules being
integrated. As a result, errors reported during Big Bang integration testing are
very expensive to fix.
Advantages:
. It is convenient for small systems.
. Simple and straightforward approach.
. Can be completed quickly.
. Does not require a lot of planning or coordination.
. May be suitable for small systems or projects with a low degree of
interdependence between components.
Disadvantages:
. There will be quite a lot of delay because you would have to wait for all the
modules to be integrated.
. High-risk critical modules are not isolated and tested on priority since all
modules are tested at once.
. Not Good for long projects.
. High risk of integration problems that are difficult to identify and diagnose,
which can result in long and complex debugging and troubleshooting
efforts, system downtime, and increased development costs.
. May not provide enough visibility into the interactions and data exchange
between components, which can result in a lack of confidence in the
system's stability and reliability.
. Can lead to decreased efficiency and productivity, reduced confidence in
the development team, and ultimately system failure and decreased user
satisfaction.
Types of Non-functional Testing
. Performance Testing
. Usability Testing
. Compatibility Testing
1. Performance Testing
Performance Testing is a type of software testing that ensures software
applications perform properly under their expected workload. It is a testing
technique carried out to determine system performance in terms of sensitivity,
reactivity, and stability under a particular workload.
Performance testing is a type of software testing that focuses on evaluating the
performance and scalability of a system or application. The goal of
performance testing is to identify bottlenecks, measure system performance
under various loads and conditions, and ensure that the system can handle the
expected number of users or transactions.
Advantages of Performance Testing:
● Performance testing ensures the speed, load capability, accuracy, and
other performance characteristics of the system.
● It identifies, monitors, and resolves issues if any occur.
● It ensures the great optimization of the software and also allows many
users to use it at the same time.
● It ensures the client as well as the end-customer’s satisfaction.
Performance testing has several advantages that make it an important
aspect of software testing:
● Identifying bottlenecks: Performance testing helps identify bottlenecks in
the system such as slow database queries, insufficient memory, or network
congestion. This helps developers optimize the system and ensure that it
can handle the expected number of users or transactions.
2. Usability Testing
You design a product (say, a refrigerator), and when it is completely ready,
you need potential customers to test it and check that it works. To understand
whether the machine is ready to come onto the market, potential customers
test it. Likewise, usability testing is when the software undergoes testing
performed by potential users before it is launched onto the market. It is a part
of the software development life cycle (SDLC).
Advantages and Disadvantages of Usability Testing:
Usability testing is preferred for evaluating a product or service by testing it
with representative users. In usability testing, the development and design
teams use it to identify issues before coding, so issues are solved earlier.
During a usability test, you can:
● Learn whether participants are able to complete specific tasks completely.
● Identify how long it takes to complete a specific task.
Usability testing also:
● Gives excellent features and functionalities to the product.
● Improves user satisfaction and fulfills requirements based on user feedback.
● Makes the product more efficient and effective.
3. Compatibility Testing
Compatibility testing is software testing that comes under the non-functional
testing category. It is performed on an application to check its compatibility
(running capability) on different platforms and environments, and it is done
only once the application becomes stable. Simply put, compatibility testing
aims to check that the developed software application functions correctly on
various software and hardware platforms, networks, browsers, etc. It is very
important from a production and implementation point of view, as it is
performed to avoid future compatibility issues.
Advantages of Compatibility Testing:
● It ensures complete customer satisfaction.
● It provides service across multiple platforms.
● Identifying bugs during the development process.
There are 4 Types of Performance Testing
. Load Testing
. Stress Testing
. Scalability Testing
. Stability Testing
1. Load Testing
Load testing determines the behavior of the application when multiple users
use it at the same time. It is the response of the system measured under
varying load conditions.
. Load testing is carried out for normal and extreme load conditions.
. Load testing is a type of performance testing that simulates a real-world
load on a system or application to see how it performs under stress.
. The goal of load testing is to identify bottlenecks and determine the
maximum number of users or transactions the system can handle.
. It is an important aspect of software testing as it helps ensure that the
system can handle the expected usage levels and identify any potential
issues before the system is deployed to production.
Advantages of Load Testing:
Load testing has several advantages that make it an important aspect of
software testing:
. Identifying bottlenecks: Load testing helps identify bottlenecks in the
system such as slow database queries, insufficient memory, or network
congestion. This helps developers optimize the system and ensure that it
can handle the expected number of users or transactions.
. Improved scalability: By identifying the system's maximum capacity, load
testing helps ensure that the system can handle an increasing number of
users or transactions over time. This is particularly important for web-based
systems and applications that are expected to handle a high volume of
traffic.
. Improved reliability: Load testing helps identify any potential issues that
may occur under heavy load conditions, such as increased error rates or
slow response times. This helps ensure that the system is reliable and
stable when it is deployed to production.
2. Stress Testing
In Stress Testing, we give unfavorable conditions to the system and check how
it performs under those conditions.
Example:
. Test cases that require maximum memory or other resources are executed.
. Test cases that may cause thrashing in a virtual operating system.
. Test cases that may cause excessive disk space requirements.
Stress testing is designed to test the run-time behavior of software within the
context of an integrated system when it is pushed beyond its expected load.
Unlike load testing, which measures performance under expected conditions,
stress testing checks how the system behaves at and beyond its limits.
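A minimal stress sketch in Python: a hypothetical parse_csv_line unit is pushed far past its normal input size to confirm that it fails in a controlled way rather than crashing or exhausting resources:

```python
# Stress sketch: push a component far past normal input sizes and check
# that it fails in a controlled way rather than crashing. parse_csv_line
# is a hypothetical unit under stress.
def parse_csv_line(line, max_fields=10_000):
    fields = line.split(",")
    if len(fields) > max_fields:
        raise ValueError("too many fields")
    return fields

# Normal condition: works as expected.
assert len(parse_csv_line("a,b,c")) == 3

# Unfavorable condition: a line with far more fields than expected.
huge = ",".join("x" for _ in range(100_000))
try:
    parse_csv_line(huge)
    failed_gracefully = False
except ValueError:
    failed_gracefully = True
assert failed_gracefully
```

The point of the stress test is the second half: verifying the error-handling path that normal-load testing never exercises.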
3. Scalability Testing
Scalability Testing is a type of non-functional testing in which the
performance of a software application, system, network, or process is tested in
terms of its capability to scale the user request load or other such performance
attributes up or down. It can be carried out at the hardware, software, or
database level. Scalability is defined as the ability of a network, system,
application, product, or process to continue to function correctly when its size
or volume is changed to meet a growing need. Scalability testing ensures that a
software product can manage the scheduled increase in user traffic, data
volume, transaction frequency, and many other things; it tests the ability of the
system, processes, or database to meet a growing need.
Advantages of Scalability Testing:
● It provides more accessibility to the product.
● It detects issues with web page loading and other performance issues.
● It finds and fixes the issues earlier in the product which saves a lot of time.
● It ensures the end-user experience under the specific load. It provides
customer satisfaction.
● It helps in effective tool utilization tracking.
4. Stability Testing
Stability Testing is a type of Software Testing that checks the quality and
behavior of the software under different environmental parameters. Stability is
defined as the ability of the product to continue to function over time without
failure.
It is a non-functional testing technique that focuses on stressing a software
component to its maximum. Stability testing checks the efficiency of a
developed product beyond normal operational capacity, up to the break point.
It places higher significance on error handling, software reliability, robustness,
and scalability of a product under heavy load than on checking the system's
behavior under normal circumstances.
Stability testing assesses stability problems; it is mainly intended to check
whether the application will crash at any point in time.
Advantages of Stability Testing:
. It gives the limit of the data that a system can handle practically.
. It provides confidence in the performance of the system.
. It determines the stability and robustness of the system under load.
. Stability testing leads to a better end-user experience.
_______________________________________________________________
________________________________________________
UNIT TESTING -> INTEGRATION TESTING -> SYSTEM TESTING ->
ACCEPTANCE TESTING
Acceptance Testing – Software Testing
Acceptance Testing is a method of software testing where a system is tested
for acceptability. The major aim of this test is to evaluate the compliance of the
system with the business requirements and assess whether it is acceptable for
delivery or not.
Standard Definition of Acceptance Testing
It is formal testing according to user needs, requirements, and business
processes conducted to determine whether a system satisfies the acceptance
criteria or not and to enable the users, customers, or other authorized entities
to determine whether to accept the system or not.
Acceptance Testing is the last phase of software testing performed after
System Testing and before making the system available for actual use.
Types of Acceptance Testing
1. User Acceptance Testing (UAT)
User acceptance testing is used to determine whether the product is working
for the user correctly. Specific requirements which are quite often used by the
customers are primarily picked for testing purposes. This is also termed
as End-User Testing.
2. Business Acceptance Testing (BAT)
BAT is used to determine whether the product meets the business goals and
purposes or not. BAT mainly focuses on business profits which are quite
challenging due to the changing market conditions and new technologies, so
the current implementation may have to be changed, which results in extra
budget.
3. Contract Acceptance Testing (CAT)
CAT is performed against a contract which specifies that, once the product
goes live, the acceptance test must be carried out within a predetermined
period and must pass all the acceptance use cases. The contract, termed a
Service Level Agreement (SLA), includes terms under which payment will be
made only if the product's services are in line with all the requirements,
meaning the contract is fulfilled. Sometimes, this testing happens before the product
goes live. There should be a well-defined contract in terms of the period of
testing, areas of testing, conditions on issues encountered at later stages,
payments, etc.
4. Regulations Acceptance Testing (RAT)
RAT is used to determine whether the product violates the rules and regulations
that are defined by the government of the country where it is being released.
This may be unintentional but will impact the business negatively. Generally, a
product or application that is to be released in the market has to undergo RAT,
as different countries or regions have different rules and regulations defined by
their governing bodies. If any rules or regulations are violated for a country or
region, the product will not be released in that country or region. If the product
is released even though there is a violation, the vendors of the product will be
directly responsible.
5. Operational Acceptance Testing (OAT)
OAT is used to determine the operational readiness of the product and is non-
functional testing. It mainly includes testing of recovery, compatibility,
maintainability, reliability, etc. OAT assures the stability of the product before it
is released to production.
6. Alpha Testing
Alpha testing is used to evaluate the product in the development/testing
environment by a specialized team of testers, usually called alpha testers.
7. Beta Testing
Beta testing is used to assess the product by exposing it to real end-users,
typically called beta testers, in their own environment. Feedback is collected from
the users and the defects are fixed. Also, this helps in enhancing the product to
give a rich user experience.
Use of Acceptance Testing
. To find the defects missed during the functional testing phase.
. To determine how well the product is developed.
. To verify that the product is what the customers actually need.
. Feedback helps in improving product performance and user experience.
. To minimize or eliminate issues arising in production.
Advantages of Acceptance Testing
. This testing helps the project team learn further requirements directly from
the users, as it involves the users in testing.
. Automated test execution.
. It brings confidence and satisfaction to the clients as they are directly
involved in the testing process.
. It is easier for the users to describe their requirements.
. It covers only the Black-Box testing process, and hence the entire
functionality of the product will be tested.
Disadvantages of Acceptance Testing
. Users should have basic knowledge about the product or application.
. Sometimes, users don’t want to participate in the testing process.
. The feedback from the testing takes a long time, as it involves many users
and opinions may differ from one user to another.
. The development team does not participate in this testing process.
_______________________________________________________________
________________________________________________
What Is a Top-Down Approach?
A top-down approach is a method or strategy of analysis, problem-solving, or
organization where the process begins at the highest conceptual level and
progresses to the details. This approach often contrasts with the bottom-up
approach, which starts with the details and works upwards to form a
comprehensive view or solution. Here are some key aspects of the top-down
approach:
. Overview First: The top-down approach starts with a broad overview or
general outline of the system, project, or problem. This includes defining
main objectives and goals before diving into specifics.
. Breaking Down: After establishing a high-level perspective, the next step
involves breaking down the larger system or problem into smaller, more
manageable components or tasks. This division continues until the desired
level of detail is achieved.
. Simplifies Complexity: This approach starts with a macro view, simplifying
complex systems or problems, making them easier to understand and
manage by showing how different parts relate to the whole.
. Focus on Priorities: It allows managers or decision-makers to focus on key
priorities and strategic alignments from the outset, which can guide the
detailed work that follows.
. Decision Making: In planning and decision-making, top-down approaches
align lower-level activities and decisions with the overarching goals or
policies decided at higher levels.
Benefits of a Top-Down Approach
The top-down approach offers several benefits across different fields,
from project management to software development. Here are some of the key
advantages:
. Clear Vision and Direction: Starting from the top allows leaders to set
clear objectives and establish a vision for the entire project or organization.
This helps ensure that all efforts are aligned with the overarching goals,
providing a consistent direction that guides all subsequent actions and
decisions.
. Simplified Decision Making: The top-down approach simplifies decision-
making processes by focusing on the big picture and main priorities. It
helps filter out less relevant issues and concentrate resources on what truly
matters, improving efficiency and effectiveness.
. Easier Management and Control: This approach facilitates management
and control as the hierarchy and roles are clearly defined from the outset.
Higher-level managers can more easily oversee and coordinate various
parts of a project or organization since each lower level's activities are
designed to align with top-level objectives.
. Facilitates Planning and Allocation of Resources: With a comprehensive
view from the top, it becomes easier to plan and allocate resources
effectively across different parts of the organization or project. Leaders can
assess needs and distribute resources in a manner that supports the most
critical aspects first.
. Improves Communication: A top-down approach can
streamline communication by clarifying what information needs to flow
between different levels of the organization. It ensures that all members are
on the same page and that important messages and strategies are
communicated clearly and directly from the top.
. Quick Implementation: In some cases, especially where rapid decision-
making is critical, the top-down approach allows for quicker implementation
of policies and decisions since directives come from the top and move
down without requiring extensive consultations at every level.
. Reduces Complexity: This approach can reduce complexity by breaking
down large projects or problems into smaller, more manageable parts after
defining the main goals and structures. This simplifies understanding and
executing tasks.
What Is a Bottom-Up Approach?
A bottom-up approach is a strategy used across various fields, including
management, software development, and project planning, where the process
begins at the most detailed and basic level and works upwards to form a
comprehensive picture or solution. This approach often contrasts with the top-
down approach, which starts at the highest conceptual level and progresses to
the details. Here are the key aspects of the bottom-up approach:
. Detail-Oriented Start: The bottom-up approach starts at the grassroots
level, focusing on specific details, small components, or individual elements
before integrating them into a larger system or conclusion.
. Incremental Development: This method involves building systems
incrementally, piece by piece, ensuring that each component works
properly before integrating it into the larger system. This helps identify and
fix issues at an early stage.
. Empowerment and Participation: The bottom-up approach encourages
participation and decision-making from the lower levels of the organization
or group. This can increase engagement, innovation, and morale as
individuals feel their contributions are valued.
. Local Insights and Adaptability: The bottom-up approach starts at the
grassroots level and takes advantage of local knowledge and expertise.
This can be particularly beneficial in solving complex problems that require
a nuanced understanding of specific contexts.
. Flexibility: This approach allows for more flexibility as changes can be
made more easily at the lower levels without needing extensive revisions to
a top-level plan.
. Problem-Solving: In a bottom-up approach, problem-solving is often more
effective because it's done at the level where the problems occur, allowing
for more accurate and tailored solutions.
Benefits of a Bottom-Up Approach
The bottom-up approach offers several benefits, particularly when flexibility,
innovation, and detailed insight are crucial. Here are some key advantages of
using a bottom-up approach:
. Enhanced Innovation: A bottom-up approach can foster greater innovation
by involving team members closest to the problems or tasks. These
individuals often have unique insights and creative ideas that can lead to
novel solutions.
. Increased Employee Engagement: This approach empowers employees
by actively involving them in decision-making processes, which can boost
morale, increase job satisfaction, and reduce turnover. Employees are more
likely to be engaged when they feel their opinions are valued, and their
contributions can make a direct impact.
. Greater Flexibility and Responsiveness: Starting from the ground up allows
organizations to adapt more readily to changes and challenges. Since decisions
are made closer to the operational level, responses can be quicker and
more tailored to the specific context or issue.
. Improved Problem-Solving: Problems are often identified and solved more
effectively when tackled by those who encounter them daily. The bottom-
up approach leverages individuals' hands-on experience to address issues
accurately and efficiently.
. Detailed and Comprehensive Understanding: As the process starts at
the ground level, it naturally incorporates a deeper understanding of all
aspects of the project or problem, ensuring no detail is overlooked. This
detailed scrutiny helps in building a thorough and robust overall picture.
. Democratization of the Workplace: It democratizes the workplace by
distributing decision-making authority. This can lead to a more inclusive
work culture where diverse perspectives are considered, leading to well-
rounded decisions.
. Better Risk Management: With more individuals involved in the analysis
and decision-making process, risks can be identified early and managed
more effectively from various angles and perspectives.
. Local Optimization: In a bottom-up approach, each component or part of a
project is optimized independently, leading to better performance and
efficiency at the local level, which in turn positively impacts the overall
outcome.
_______________________________________________________________
________________________________________________
Stubs:
● Definition: Stubs are placeholder implementations or simulated modules
that are used in place of actual modules or components that a module
being tested depends on.
● Purpose: Stubs are used when a module being tested relies on the
functionality of another module, which may not be available or fully
implemented at the time of testing.
● Functionality: Stubs provide basic functionality or predetermined
responses that allow the module being tested to proceed with its execution.
● Dependency: Stubs represent the dependent modules that the module
being tested relies on.
● Testing Focus: Stubs are primarily used in top-down testing approaches,
where higher-level modules are tested before lower-level modules.
● Example: In a client-server architecture, if the client module is being
tested, a stub can be used to simulate the server's functionality and
respond to the client's requests.
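The client-server example above can be sketched in code. This is a minimal illustrative sketch, not a real library: PaymentClient and ServerStub are hypothetical names. The stub stands in for the unfinished server and returns a canned response so the higher-level client can be tested top-down.

```python
class ServerStub:
    """Stub: stands in for the real server and returns canned responses."""
    def process(self, amount):
        # Predetermined response -- no real server logic involved.
        return {"status": "approved", "amount": amount}

class PaymentClient:
    """The higher-level module actually being tested."""
    def __init__(self, server):
        self.server = server  # dependency is injected, so a stub fits here

    def pay(self, amount):
        if amount <= 0:
            return "invalid amount"
        response = self.server.process(amount)
        return "success" if response["status"] == "approved" else "failure"

# Top-down testing: exercise the client against the stubbed server.
client = PaymentClient(ServerStub())
print(client.pay(100))   # success
print(client.pay(-5))    # invalid amount
```

Because the stub honors the same `process()` interface the real server will expose, it can later be swapped out for the actual implementation without changing the client.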
Drivers:
● Definition: Drivers are software components that enable the testing of a
module in isolation by simulating the behavior and functionality of the
higher-level modules that interact with it.
● Purpose: Drivers are used when a module being tested requires input or
interaction from other modules that are not yet developed or available.
● Functionality: Drivers provide the necessary input or interaction to the
module being tested to simulate the behavior of the higher-level modules.
● Dependency: Drivers represent the calling modules that interact with the
module being tested.
● Testing Focus: Drivers are commonly used in bottom-up testing
approaches, where lower-level modules are tested before higher-level
modules.
● Example: In a software system with a layered architecture, if the core
processing module is being tested, a driver can be used to simulate the
behavior of the user interface module and provide input to the core
processing module.
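The layered-architecture example can likewise be sketched. In this hypothetical snippet (the function names are invented), compute_discount plays the role of the finished lower-level module, and ui_driver simulates the calls the not-yet-built user interface would make, so the module can be tested bottom-up.

```python
def compute_discount(price, customer_type):
    """Lower-level module under test."""
    rates = {"regular": 0.0, "member": 0.10, "vip": 0.20}
    return round(price * (1 - rates.get(customer_type, 0.0)), 2)

def ui_driver():
    """Driver: mimics the calls the (unbuilt) UI module would make."""
    test_inputs = [(100.0, "regular"), (100.0, "member"), (100.0, "vip")]
    results = []
    for price, ctype in test_inputs:
        # The driver supplies the inputs the UI would normally provide.
        results.append(compute_discount(price, ctype))
    return results

print(ui_driver())  # [100.0, 90.0, 80.0]
```

Once the real user interface exists, it replaces the driver and calls compute_discount directly.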
Stubs vs. Drivers :
● Purpose: Stubs are used in top-down integration testing; drivers are used
in bottom-up integration testing.
● Functionality: A stub provides a simplified implementation of a lower-level
module or component that the tested module depends on; a driver mimics
the behavior of a higher-level module that calls the tested module.
● Dependency: Stubs simulate called modules or components; drivers
simulate calling modules or components.
● Integration Direction: Stubs are used when the lower-level components
are not yet developed or available; drivers are used when the higher-level
components are not yet developed or available.
● Development Timing: Stubs are created by the developers of higher-level
components; drivers are created by the developers of lower-level
components.
● Focus: Stubs emphasize testing the main module in isolation; drivers
focus on exercising the module under test through the calls its
higher-level modules would make.
● Interface: A stub exposes the same interface as the actual lower-level
module but with simplified behavior; a driver calls the tested module
through its actual interface while generating simplified interactions.
● Implementation: A stub contains minimal code to satisfy function calls; a
driver contains minimal code to generate calls and interactions.
● Complexity: Stubs are usually simpler than the actual modules; drivers
can be more complex because they must simulate interactions accurately.
● Types: Stubs include dummy stubs, constant stubs, behavior stubs, etc.;
drivers include event drivers, call drivers, data drivers, etc.
● Error Handling: Neither stubs nor drivers necessarily handle errors or
exceptions the way the actual modules would.
● Testing Focus: Stubs assist in testing higher-level modules without
worrying about lower-level logic; drivers assist in testing lower-level
modules while ignoring higher-level complexities.
_______________________________________________________________
________________________________________________
ERP (Enterprise Resource Planning)—>>
ERP stands for Enterprise Resource Planning. ERP systems are software
tools used to manage the data of an enterprise. An ERP system helps an
organization deal with the different departments of the enterprise, such as
receiving, inventory management, customer order management, production
planning, shipping, accounting, human resource management, and other
business functions. Basically, it is the practice of consolidating an
enterprise's planning, manufacturing, sales, and marketing efforts into one
management system. It combines the databases of the different departments
into a single database that is easily accessible to all employees of the
enterprise, and it helps automate the tasks involved in performing a business
process.
Before ERP:
Before an ERP system, each department maintained its own separate
database, and the employees of one department knew nothing about the
other departments.
After ERP :
Figure – After ERP
With an ERP system, the databases of the different departments are managed
by one system, the ERP system, which keeps track of all the databases within
the system. In this scenario, employees of one department have information
regarding the other departments.
Vendors of ERP:
● Baan
● JD Edwards
● Oracle
● PeopleSoft
● SAP
Benefits of ERP:
. This system helps in improving integration.
. It is a flexible system.
. This system improves speed and efficiency.
. There is complete access to information.
. It lowers total costs across the complete supply chain.
. This system helps in shortening throughput times.
. There is sustained involvement and commitment of the top management.
. Enhanced Decision-Making: ERP provides real-time access to critical
business data, enabling decision-makers to quickly identify and respond to
issues, make informed decisions, and improve business outcomes.
. Improved Collaboration: ERP facilitates collaboration and communication
between different departments and stakeholders, enabling them to work
together effectively towards common business goals.
. Standardization of Processes: ERP ensures that business processes are
standardized across the organization, reducing the risk of errors and
inconsistencies and improving efficiency.
. Effective Resource Management: ERP enables efficient management of
resources such as personnel, equipment, and inventory, ensuring optimal
utilization and reducing wastage.
. Scalability: ERP is highly scalable and can be customized to meet the
evolving needs of the business, ensuring that the system remains relevant
and effective over the long term.
. Regulatory Compliance: ERP systems can help businesses comply with
regulatory requirements by providing accurate and timely reporting,
ensuring data privacy and security, and facilitating audits.
Limitations of ERP:
An ERP system has several significant limitations:
. Managers can generate custom reports or queries only with the help of a
programmer, which means they do not receive information quickly enough
for it to be a competitive advantage.
. There is no proper decision-making support, i.e. these systems provide only
the current status, such as open orders. Whenever there is a need to look
at past status to find trends and patterns, the system does not aid better
decision-making.
. No doubt the data is integrated within the system, but there is no
integration of that data with other enterprise or division systems, and it
does not include external intelligence.
. High implementation costs: Implementing an ERP system can be expensive
and time-consuming. It requires significant investment in hardware,
software, and personnel, as well as training and consulting costs.
. Complex customization: Customizing an ERP system to meet the specific
needs of an organization can be complex and require specialized
knowledge. This can lead to delays and additional costs.
. Resistance to change: ERP systems often require significant changes to an
organization’s processes and workflows, which can be met with resistance
from employees who are comfortable with existing practices.
. Data security risks: Centralizing sensitive business data in an ERP system
creates potential security risks, especially if the system is not properly
secured or if there are vulnerabilities in the software.
. Dependence on vendor support: Organizations that use ERP systems are
often heavily dependent on the vendor for support, maintenance, and
upgrades. This can create a risk of vendor lock-in and limit an
organization’s ability to switch to other systems or providers.
_______________________________________________________________
________________________________________________
MRP (Material Requirements Planning)—>>
What is MRP?
MRP is a production planning and inventory control system used to calculate
the materials and components needed to manufacture a product. It works
backwards from the production schedule to determine what to order, how
much, and when.
Advantages of MRP
● It keeps inventory levels low, since material is ordered only when it is
needed.
● It improves production scheduling and helps ensure that materials are
available on time.
Disadvantages of MRP
● Data quality
MRP systems rely heavily on accurate data, and if the data is incorrect, the
entire process can be compromised.
● Database maintenance
MRP systems require robust databases with accurate information about
inventory records and production schedules.
● End user training
MRP systems require proper training for end users to get the most out of
the system.
________________________________________________________
______________________________________________________
CRM (Customer Relationship Management):
CRM is a strategy to maintain the relationship with the existing customers as
well as future customers and retain them to drive the growth of the
organization. It is widely implemented in all growing industries.
CRMs are designed to compile the information a company uses to contact
customers, which includes the company's website, email, phone number,
products, services, live chat, etc. They also give detailed information about
customers, such as their personal details, phone numbers, purchasing history,
comments, advice, etc.
CRM involves e-mail marketing and integration, documents, sales calls,
relationship management, etc.
Role of CRM :
● It sets bold aspirations, i.e., a clear vision for the development of the
relationship.
● It sets and executes the client relationship strategy.
● It creates, manages and leads the team.
Functions of CRM :
● Customer needs
● Customer response
● Customer satisfaction
● Customer loyalty
● Customer retention
● Customer complaints
● Customer service
Advantages of CRM :
● Improved Customer Experience: It allows you to simplify your processes
from beginning to end as per the requirements and expectations of the
customers. It improves customers' experience and their relationship with
your company.
● Focused Marketing Efforts: It provides you with data related to your sales
pipelines and existing customers. So, instead of mass marketing, you can
focus your marketing efforts on key market segments.
● Improved Analytics Data and Reporting: It allows you to track and
analyze the buying habits of your customers. You have automatic access
to all reports on the items or products sold and the customers who bought
them. Thus, you can analyze your customers and sales by month, quarter,
year, etc.
● Improved Coordination and Cooperation: It improves the coordination
among sales, marketing and customer service departments as they share a
common CRM platform and can work more cohesively or as a single unit.
● Automation of Tasks: There are a number of smaller tasks associated with
a process that must be completed in order to complete a larger task, e.g.
filling forms, generating receipts, and sending reports to seniors while
selling a product. The CRM can complete most of such tasks, allowing
sales representatives to focus their efforts on convincing customers and
closing deals faster.
Disadvantages of CRM :
● Security and privacy in the cloud.
● Limited control and flexibility.
● Increased vulnerability.
● It needs additional management to maintain.
● It may result in duplication of tasks.
_______________________________________________________________
_______________________________________________
Software Maintenance and SCM (Software Configuration Management)—>>
Software Configuration Management (SCM) is the discipline of identifying,
tracking, and controlling changes to software artifacts (source code,
documents, and other configuration items) throughout the software lifecycle,
including the software maintenance phase.
_______________________________________________________________
________________________________________________
ISO 9000 Certification
The International Organization for Standardization (ISO) is a worldwide
federation of national standards bodies. An ISO standard serves as a
reference for contracts between independent parties and specifies guidelines
for the development of a quality system.
The quality system of an organization covers the various activities related to
its products or services. The ISO standard addresses both operational and
organizational aspects, including responsibilities, reporting, etc. An ISO 9000
standard contains a set of guidelines for the production process without
considering the product itself.
Why ISO Certification required by Software Industry?
There are several reasons why the software industry should get ISO
certification. Some of the reasons are as follows :
● This certification has become a standard for international bidding.
● It helps in designing high-quality, repeatable software products.
● It emphasizes the need for proper documentation.
● It facilitates the development of optimal processes and total quality
measurements.
Features of ISO 9001 Requirements :
● Document control –
All documents concerned with the development of a software product
should be properly managed and controlled.
● Planning –
Proper plans should be prepared and monitored.
● Review –
For effectiveness and correctness, all important documents across all
phases should be independently checked and reviewed.
● Testing –
The product should be tested against specification.
● Organizational Aspects –
Various organizational aspects should be addressed e.g., management
reporting of the quality team.
Advantages of ISO 9000 Certification :
Some of the advantages of the ISO 9000 certification process are the
following :
● ISO 9000 certification forces a corporation to focus on "how they are
doing business". Each procedure and work instruction must be
documented and thus becomes a springboard for continuous improvement.
● Employee morale increases as employees are asked to take control of their
processes and document their work processes.
● Better products and services result from the continuous improvement
process.
● Increased employee participation, involvement, awareness, and systematic
employee training reduce problems.
_______________________________________________________________
________________________________________________
Capability Maturity Model (CMM) – Software Engineering
Capability Maturity Model (CMM) was developed by the Software Engineering
Institute (SEI) at Carnegie Mellon University in 1987. It is not a software process
model. It is a framework that is used to analyze the approach and techniques
followed by any organization to develop software products. It also provides
guidelines to enhance further the maturity of the process used to develop
those software products.
Levels of Capability Maturity Model (CMM)
There are 5 levels of Capability Maturity Models. We will discuss each one of
them in detail.
Level-1: Initial—>>>
● No KPAs (Key Process Areas) defined.
● Processes followed are ad hoc, immature, and not well defined.
● Unstable environment for software development.
● No basis for predicting product quality, time for completion, etc.
● Limited project management capabilities, such as no systematic tracking of
schedules, budgets, or progress.
● Limited communication and coordination among team members and
stakeholders.
● No formal training or orientation for new team members.
● Little or no use of software development tools or automation.
● Highly dependent on individual skills and knowledge rather than
standardized processes.
● High risk of project failure or delays due to a lack of process control and
stability.
Level-2: Repeatable—>>
● Focuses on establishing basic project management policies.
● Experience with earlier projects is used for managing new similar-natured
projects.
● Project Planning- It includes defining resources required, goals,
constraints, etc. for the project. It presents a detailed plan to be followed
systematically for the successful completion of good-quality software.
● Configuration Management- The focus is on maintaining the integrity of
the software product, including all its components, over the entire
lifecycle.
● Requirements Management- It includes the management of customer
reviews and feedback which result in some changes in the requirement set.
It also consists of accommodation of those modified requirements.
● Subcontract Management- It focuses on the effective management of
qualified software contractors i.e. it manages the parts of the software
developed by third parties.
● Software Quality Assurance- It guarantees a good quality software
product by following certain rules and quality standard guidelines while
developing.
Level-3: Defined—->>
● At this level, documentation of the standard guidelines and procedures
takes place.
● It is a well-defined integrated set of project-specific software engineering
and management processes.
● Peer Reviews: In this method, defects are removed by using several review
methods like walkthroughs, inspections, buddy checks, etc.
● Intergroup Coordination: It consists of planned interactions between
different development teams to ensure efficient and proper fulfillment of
customer needs.
● Organization Process Definition: Its key focus is on the development and
maintenance of standard development processes.
● Organization Process Focus: It includes activities and practices that
should be followed to improve the process capabilities of an organization.
● Training Programs: It focuses on the enhancement of the knowledge and
skills of the team members, including the developers, and on ensuring an
increase in work efficiency.
Level-4: Managed—>>>
● At this stage, quantitative quality goals are set for the organization for
software products as well as software processes.
● The measurements made help the organization to predict the product and
process quality within some limits defined quantitatively.
● Software Quality Management: It includes the establishment of plans and
strategies to develop quantitative analysis and understanding of the
product’s quality.
● Quantitative Management: It focuses on controlling the project
performance quantitatively.
Level-5: Optimizing—>>>
● This is the highest level of process maturity in CMM and focuses on
continuous process improvement in the organization using quantitative
feedback.
● The use of new tools, techniques, and evaluation of software processes is
done to prevent the recurrence of known defects.
● Process Change Management: Its focus is on the continuous
improvement of the organization’s software processes to improve
productivity, quality, and cycle time for the software product.
● Technology Change Management: It consists of the identification and use
of new technologies to improve product quality and decrease product
development time.
● Defect Prevention It focuses on the identification of causes of defects and
prevents them from recurring in future projects by improving project-
defined processes.
_______________________________________________________________
________________________________________________
Class Diagram Component—>>>
● Upper Section: The upper section encompasses the name of the class. A
class is a representation of similar objects that share the same
relationships, attributes, operations, and semantics. Some rules that
should be taken into account while representing a class are given below:
. Capitalize the initial letter of the class name.
. Place the class name in the center of the upper section.
. A class name must be written in bold format.
. The name of the abstract class should be written in italics format.
● Middle Section: The middle section constitutes the attributes, which
describe the quality of the class. The attributes have the following
characteristics:
. The attributes are written along with its visibility factors, which are
public (+), private (-), protected (#), and package (~).
. The accessibility of an attribute class is illustrated by the visibility
factors.
. A meaningful name should be assigned to the attribute, which will
explain its usage inside the class.
● Lower Section: The lower section contains the methods or operations. The
methods are represented in the form of a list, where each method is written
on a single line. It demonstrates how a class interacts with data.
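As a rough illustration, the three sections of a UML class box can be mapped onto code. Python has no enforced visibility, so the +, -, and # markers are only approximated by naming conventions here; BankAccount and its members are invented for the example.

```python
class BankAccount:                 # upper section: the class name
    def __init__(self, owner):
        # middle section: attributes with approximated visibility markers
        self.owner = owner         # + owner    : public
        self._branch = "main"      # # _branch  : "protected" by convention
        self.__balance = 0.0       # - __balance: "private" (name-mangled)

    # lower section: operations, one per line in the UML box
    def deposit(self, amount):
        self.__balance += amount

    def balance(self):
        return self.__balance

acct = BankAccount("Asha")
acct.deposit(250.0)
print(acct.balance())  # 250.0
```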
Relationships
In UML, relationships are of three types:
● Dependency: A dependency is a semantic relationship between two or
more classes where a change in one class causes changes in another class.
It forms a weaker relationship.
In the following example, Student_Name is dependent on the Student_Id.
● Generalization: A generalization is a relationship between a parent class
(superclass) and a child class (subclass). In this, the child class is inherited
from the parent class.
For example, The Current Account, Saving Account, and Credit Account are
the generalized form of Bank Account.
● Association: It describes a static or physical connection between two or
more objects. It depicts how many objects are there in the relationship.
For example, a department is associated with the college.
Multiplicity: It defines the allowable range of instances of an attribute. If a
range is not specified, one is considered the default multiplicity.
For example, multiple patients are admitted to one hospital.
Aggregation: An aggregation is a subset of association which represents a
has-a relationship. It is more specific than association. It defines a part-whole
or part-of relationship. In this kind of relationship, the child class can exist
independently of its parent class.
The company encompasses a number of employees, and even if one employee
resigns, the company still exists.
Composition: The composition is a subset of aggregation. It portrays the
dependency between the parent and its child, which means if one part is
deleted, then the other part also gets discarded. It represents a whole-part
relationship.
A contact book consists of multiple contacts, and if you delete the contact
book, all the contacts will be lost.
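The contrast between aggregation and composition described above can be sketched in code. The class names below are illustrative; the key difference shown is that in aggregation the part (Employee) outlives the whole (Company), while in composition the whole (ContactBook) creates and owns its parts (Contact).

```python
class Employee:                      # can exist on its own
    def __init__(self, name):
        self.name = name

class Company:                       # aggregation: receives existing parts
    def __init__(self, employees):
        self.employees = employees   # parts created elsewhere, merely referenced

class Contact:
    def __init__(self, name):
        self.name = name

class ContactBook:                   # composition: creates and owns its parts
    def __init__(self, names):
        self.contacts = [Contact(n) for n in names]  # parts die with the whole

e = Employee("Ravi")
c = Company([e])
del c                  # the Company is gone...
print(e.name)          # ...but the Employee still exists: Ravi

book = ContactBook(["Asha", "Ravi"])
print(len(book.contacts))  # 2 -- these Contacts exist only inside the book
```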
Abstract Classes
In an abstract class, no object can be a direct instance of the class: an
abstract class can be declared but not instantiated. It is used to factor out
functionality that is common across classes. The notation of an abstract class
is similar to that of a class; the only difference is that the name of the class is
written in italics. Since it does not provide an implementation for its abstract
operations, an abstract class is meant to be realized by multiple concrete
classes.
Let us assume that we have an abstract class named Displacement with a
method named drive() declared inside it. This abstract method can then be
implemented by any concrete class, for example, car, bike, scooter, cycle, etc.
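The Displacement / drive() example can be sketched with Python's abc module (the concrete class names and return strings are illustrative). The abstract class cannot be instantiated directly; each concrete class supplies its own drive() implementation.

```python
from abc import ABC, abstractmethod

class Displacement(ABC):           # abstract class: shown in italics in UML
    @abstractmethod
    def drive(self):
        ...                        # no implementation in the abstract class

class Car(Displacement):
    def drive(self):
        return "car driving"

class Bike(Displacement):
    def drive(self):
        return "bike driving"

for vehicle in (Car(), Bike()):
    print(vehicle.drive())

# Calling Displacement() directly would raise a TypeError, because an
# abstract class cannot be instantiated.
```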
_______________________________________________________________
________________________________________________
E-R Diagram—>>>
Components :
Rectangle — denotes entity
Ellipse — denotes attributes
Diamond — denotes relationship
Line — denotes a link between an entity and an attribute, or between an entity and a relationship
Double Ellipse — denotes multivalued attribute
Dashed Ellipse — denotes derived attribute
Double Rectangle — denotes weak entity
_______________________________________________________________
________________________________________________
UML Activity Diagram
In UML, the activity diagram is used to demonstrate the flow of control within
the system rather than the implementation. It models the concurrent and
sequential activities.
The activity diagram helps in envisioning the workflow from one activity to
another. It puts emphasis on the conditions of flow and the order in which
flow occurs. The flow can be sequential, branched, or concurrent, and to deal
with such kinds of flows, the activity diagram provides constructs such as
fork, join, etc.
It is also termed as an object-oriented flowchart. It encompasses activities
composed of a set of actions or operations that are applied to model the
behavioral diagram.
Components of an Activity Diagram
Following are the component of an activity diagram:
Activities
The categorization of behavior into one or more actions is termed an activity.
In other words, an activity is a network of nodes connected by edges. The
edges depict the flow of execution. It may contain action nodes, control
nodes, or object nodes.
The control flow of an activity is represented by control nodes, while object
nodes illustrate the objects used within the activity. Activities are initiated at
the initial node and terminated at the final node.
Activity partition / swimlane
The swimlane is used to cluster all the related activities in one column or one
row. It can be either vertical or horizontal. It is used to add modularity to the
activity diagram. It is not necessary to incorporate swimlanes in an activity
diagram, but they add more transparency to it.
Forks
Fork and join nodes generate concurrent flow inside the activity. A fork node
consists of one inward edge and several outward edges, with a notation
similar to that of a decision node with multiple outgoing edges. Whenever
data is received at the inward edge, it gets copied and split across the various
outward edges: a fork splits a single inward flow into multiple parallel flows.
Join Nodes
Join nodes are the opposite of fork nodes. A logical AND operation is
performed on all of the inward edges, synchronizing the incoming flows into
one single output (outward) edge.
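Fork and join semantics have a close analogue in concurrent code: a fork corresponds to launching parallel tasks, and a join to waiting for all of them before continuing. A small illustrative sketch using the Python standard library (the order-processing task names are invented):

```python
from concurrent.futures import ThreadPoolExecutor

def pack_order(order):
    return f"packed {order}"

def bill_order(order):
    return f"billed {order}"

order = "order-42"
with ThreadPoolExecutor() as pool:
    # fork: one inward flow split into two parallel flows
    packing = pool.submit(pack_order, order)
    billing = pool.submit(bill_order, order)
    # join: logical AND -- continue only when both flows have completed
    results = [packing.result(), billing.result()]

print(results)  # ['packed order-42', 'billed order-42']
```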
Notation of an Activity diagram
An activity diagram uses the following notations:
Initial State: It depicts the initial stage or beginning of the set of actions.
Final State: It is the stage where all the control flows and object flows end.
Decision Box: It makes sure that the control flow or object flow will follow only
one path.
Action Box: It represents the set of actions that are to be performed.