Software Engineering Lab File
LAB MANUAL
2. List of Experiments
3. Study and usage of OpenProj or similar software to draft a Project Plan.
4. Study and usage of OpenProj or similar software to track the progress of a Project.
9. To perform various white box and black box testing techniques.
Computers are used in almost every aspect of our lives today. The computer industry is one of the fastest
growing segments of our economy, and that growth promises to continue in the future. An engineering
degree in computer science and engineering (CSE) equips students with strong theoretical as well as practical
knowledge of computer hardware and software.
Computer engineering is projected to be one of the fastest growing professions over the next decade. Strong
employment growth combined with a limited supply of qualified persons promises remarkable employment
prospects in this field. Computer engineers have excellent career prospects in areas such as research
and development, information security, software development and programming, project management,
database management, system and network administration, data protection, consulting, software and
hardware management, sales, user support, marketing, market research, controlling, and information
research.
COURSE OUTCOMES
To develop an understanding of the basics of Software Engineering.
To be able to develop software using various tools such as SmartDraw.
Experiment No. 1
Title: - Study and usage of OpenProj or similar software to draft a Project Plan.
H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron
METHOD :-
SmartDraw is a diagram tool used to make flowcharts, organization charts, mind maps, project charts, and
other business visuals. SmartDraw has two versions: an online edition and a downloadable edition for
Windows desktop.
SmartDraw offers a full suite of diagrams, templates, tools and symbols to design anything. Use SmartDraw
to make flowcharts, residential and commercial floor plans, organizational charts, CAD and engineering
diagrams, electrical designs, landscape designs, network diagrams, app and site mockups, wireframes, and
more. SmartDraw automates much of the drawing process: users can add, move, and delete shapes, and the
app is smart enough to realign and fix drawings as they are being worked on. Businesses can create a custom
org chart using a pre-built template, or by drawing it from scratch, and utilize tools such as intelligent
formatting, Visio file import, powerful automation, visual symbols, simple commands, and other intuitive
tools.
Features:
#Intelligent Formatting:- SmartDraw is the only diagramming app with an intelligent formatting engine.
Add, delete or move shapes and your diagram will automatically adjust and maintain its arrangement.
#Integrate with tools you use:- SmartDraw integrates easily with tools you already use. With just a click,
you can send your diagram directly to Microsoft Word®, Excel®, PowerPoint®, or Outlook®. Export to PDF
and other graphic formats. SmartDraw also has apps for G Suite and the Atlassian® stack: Confluence, Jira,
and Trello. You can also connect to AWS to generate a diagram. See all of SmartDraw's integrations.
#Engineering Power:- SmartDraw allows you to draw and print architectural and engineering diagrams to
scale. SmartDraw even provides an AutoCAD-like annotation layer that automatically resizes to match a
diagram. Most diagramming apps don't do this at all.
#Collaborate Anywhere:- SmartDraw is the only diagramming tool that runs in a web browser on any
platform (Mac, PC, or mobile device) that you can also install behind a firewall on a Windows desktop and
move seamlessly between them.
You and your team can work on the same diagram using SmartDraw. You can also share files with
non-SmartDraw users. SmartDraw also works seamlessly with popular file sharing apps like Dropbox®, Box®,
Google Drive™ or OneDrive®.
#Pros of SmartDraw:
SmartDraw is a very complete application for all the types of diagrams you need, and it has great strengths
compared with Visio, which can be considered the market leader, or at least the most widely used tool.
When you need to diagram a project, from a basic algorithm to more complex Gantt charts, this tool is very
useful and makes the work much easier (in my experience it is much easier to use than Visio). The connection
between diagrams is simpler, since finishing one diagram makes it much easier to do the next. It has a wide
range of options and templates that help you in the creation process, contributing to the development of
diagrams for the systems you are going to develop. It is a very useful tool for projects where you need to
correctly capture what you want to develop; use it and try it for yourself.
#Cons of SmartDraw:
Its main drawback is the price: the full version is very expensive, even though it lets you enjoy the desktop
version, and I think its cost should be lowered to compete better in the market. The speed of the online
version sometimes fluctuates, which is a shortcoming that must be corrected.
RESULT:
This experiment introduced the basic SmartDraw components through which project planning can be done,
and the various notations used to plan a project.
Experiment No.2
Title: - Study and usage of OpenProj or similar software to track the progress of a Project.
Objective :- To get familiar with the basic OpenProj components for project management.
H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron
Method:-
Finding the right project management solution for your team can be very hard, and finding an open source
project management solution may be even harder. That is the mission of OpenProject: an open source
solution that allows teams to collaborate throughout the project life cycle. Additionally, the project aims to
replace proprietary software like Microsoft Project Server or Jira.
1. Establish and promote an active and open community of developers, users, and companies for continuous
development of the open source project.
2. Define and develop the project vision, the code of conduct, and principles of the application.
The mission of OpenProject can be quickly summarized: we want to build excellent open source project
collaboration software. And when we say open source, we mean it. We strive to make OpenProject a place to
participate, collaborate, and get involved, with an active, open-minded, transparent, and innovative
community. Companies have finally become aware of the importance of project management software and
also of the big advantages of open source. But why do project teams still tend to fall back on old-fashioned
ways of creating project plans, task lists, or status reports with Excel, PowerPoint, or Word, or keep other
expensive proprietary project management software in use? We want to offer a real open source alternative
for companies: free, secure, and easy to use.
RESULT:
This experiment introduced the concept of project monitoring using Gantt Charts
Experiment No.3
H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron
Method:-
SRS: -
An SRS minimizes the time and effort required by developers to achieve desired goals and also
minimizes the development cost. A good SRS defines how an application will interact with
system hardware, other programs and human users in a wide variety of real-world situations.
Parameters such as operating speed, response time, availability, portability, maintainability,
footprint, security and speed of recovery from adverse events are evaluated. Methods of
defining an SRS are described by the IEEE (Institute of Electrical and Electronics Engineers)
specification 830-1998.
Qualities of SRS: -
Correct
Unambiguous
Complete
Consistent
Ranked for importance and/or stability
Verifiable
Modifiable
Traceable
The below diagram depicts the various types of requirements that are captured during SRS.
Figure 3.1:-SRS requirements
ABSTRACT
"Blog" is an abbreviated version of "weblog," which is a term used to describe websites that
maintain an ongoing chronicle of information. A blog features diary-type commentary and links
to articles on other websites, usually presented as a list of entries in reverse chronological order.
Many people are still confused over what constitutes a blog over a website. Part of the problem
is that many businesses use both, integrating them into a single web presence. But there are two
features of a blog that set it apart from a traditional website.
1. Blogs are updated frequently. Whether it's a mommy blog in which a woman shares adventures in
parenting, a food blog sharing new recipes, or a business providing updates to its services, blogs have
new content added several times a week.
2. Blogs allow for reader engagement. Blogs are often included in social media because the ability for
readers to comment and have a discussion with the blogger and other readers makes them social.
Why Is Blogging So Popular?
1. Search engines love new content, and as a result, blogging is a great search engine optimization
(SEO) tool.
2. Blogging provides an easy way to keep your customers and clients up-to-date on what's going on, let
them know about new deals, and provide tips. The more a customer comes to your blog, the more
likely they are to spend money.
3. A blog allows you to build trust and rapport with your prospects. Not only can you show off what you
know, building your expertise and credibility, but because people can post comments and interact
with you, they can get to know you, and hopefully, will trust you enough to buy from you.
4. Blogs can make money. Along with your product or service, blogs can generate income from other
sources, such as advertising.
Project Introduction
The project is based on transactions and their management. The objectives of the project are to maintain
transaction management and concurrency control. Basically, the project is based on a real-world problem
related to banking and transactions. It also provides security features related to the database, such that
only an authenticated accountant or user can access the database or perform a transaction.
Since it is based on banking, it involves accountants and customers, the latter being the naïve users. There
are two types of GUI for the different users, and they provide different external views. The system supports
database sharing: two different users can work concurrently if they are authorized users and have permission
to access the database. Basically, it is based on database sharing and transaction management with
concurrency control.
The accountant end works as the admin in this project. To add a new user, an accountant adds the account
and personal details to the database, and then a key is generated, along with a password, for the naïve user
to access the database. With the help of that key and password the user can view the details and perform
transactions on the database through a very user-friendly GUI.
On the other end, the accountant has some additional facilities, such as updating the details of the users. If
any detail of a user needs to be updated, the accountant is the person who performs this task, not the
naïve user. The accountant is also the person who can close the account of any customer who wants to
close his or her account. After closing, the details of the account remain in the database for some period of
time; a trigger then deletes the data permanently after that particular time period.
Overall Description
7.) Constraints:
There is a backup for the system.
The problem was analyzed and then the design was prepared. We then implemented this design through coding,
and testing was done. If any errors were found, we tried our best to remove them and then tested again so that
all the errors in the project could be removed. This project will be maintained and upgraded from time to time
so that we can provide proper and up-to-date notes to all the users of this tutorial.
Figure: 2 (SDLC Cycle)
REQUIREMENTS SPECIFICATION
Prior to the software development effort in any type of system, it is essential to understand the requirements
of the system and its users. A complete specification of the software is the first step in the analysis of the
system. Requirements analysis provides the designer with a representation of functions and procedures that
can be translated into data, architectural and procedural design. The goal of requirements analysis is to find
out how the current system is working and whether there are any areas where improvement is necessary and
possible.
INTERFACE REQUIREMENTS: -
1.) User Interface: The package must be user friendly and robust. It must prompt the user with proper
message boxes to help them perform various actions and show how to proceed further. The system must
respond normally under any input conditions and display proper messages instead of turning up faults and
errors.
2.) Software Specification: Software is a set of programs, documents, procedures, and routines associated
with a computer system. Software is an essential complement to hardware; it is the computer programs that,
when executed, operate the hardware.
SYSTEM DESIGN: -
System design is the process of developing specifications for a candidate system that meets the criteria
established in the system analysis. A major step in system design is the preparation of the input forms and
the output reports in a form applicable to the user. The main objective of system design is to make the
system user friendly.
DATABASE DESIGN: -
The overall objective in the development of database technology has been to treat data as an organizational
resource and as an integrated whole. A database management system allows data to be protected and
organized separately from other resources. A database is an integrated collection of data. The most
significant distinction is between data as seen by the programs and data as stored on the direct-access
storage devices; this is the difference between logical and physical data.
The organization of data in the database aims to achieve three major objectives:
• Data Integration
• Data Integrity
• Data Independence
Methodology
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The
spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project
repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral,
starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral
builds on the baseline spiral.
Phases of Spiral Model: -
§ Planning Phase: Requirements are gathered during the planning phase, such as the ‘BRS’ (‘Business
Requirement Specification’) and the ‘SRS’ (‘System Requirement Specification’).
§ Risk Analysis: In the risk analysis phase, a process is undertaken to identify risk and
alternate solutions. A prototype is produced at the end of the risk analysis phase. If any risk
is found during the risk analysis, then alternate solutions are suggested and implemented.
§ Engineering Phase: In this phase software is developed, along with testing at the end of the
phase. Hence in this phase the development and testing are done.
§ Evaluation phase: This phase allows the customer to evaluate the output of the project to date
before the project continues to the next spiral.
FEASIBILITY ANALYSIS
A feasibility study is an analysis of how successfully a project can be completed, accounting for factors that
affect it such as economic, technological, legal and scheduling factors. Project managers use feasibility
studies to determine potential positive and negative outcomes of a project before investing a considerable
amount of time and money into it
TECHNICAL: -
Fundamentally, we are trying to answer the question “Can it actually be built?” To do this we
investigated the technologies to be used on the project. For each technology alternative that we assessed,
we identified its advantages and disadvantages. By studying the available resources and requirements, we
concluded that, at a minimum, the application should be made user-friendly.
ECONOMICAL: -
Keeping all the needs and demands of the system within a minimum budget, we developed new software
which will not only lower the budget but also not require much cost to adopt.
OPERATIONAL: -
The basic question that we are trying to answer is: “Is it possible to maintain and support this application
once it is in production?” We developed a system which does not require any extra technical skill or
training. It is developed using an environment which is quite familiar to most of the people concerned with
the system. The new system will prove easy to operate because it is developed in such a way that it is user
friendly. Users will find it quite familiar and easy to operate.
BEHAVIOURAL: -
o Feasibility in terms of behavior of its employees.
o It reflects the behavior of the employees of an organization.
o Main focus is on teamwork and harmony among employees with no room for
discrimination and hatred.
Earlier, the Blogger Concept used a manual system, which was based on entries in registers. The
computerized integrated system will have the following advantages over the existing system:
Handle volume of information.
Complexity of data processing.
Processing time constant.
Computational demand.
Instantaneous queries.
Security features.
Schedule Of documents: -
Good software development organizations formulate their own coding standards that suit them most, and
require their engineers to follow these standards rigorously. The purpose of requiring all engineers of an
organization to adhere to a standard style of coding is the following:
• A coding standard gives a uniform appearance to the code written by different engineers.
• It enhances code understanding.
• It encourages good programming practices.
A coding standard lists several rules to be followed during coding, such as the way variables are to be
named and used.
Important facts: -
Coding standards and guidelines: - Good software development organizations usually develop their own
coding standards and guidelines depending on what best suits their organization and the type of products they
develop. The following are some representative coding standard
Rules for limiting the use of global: - These rules list what types of data can be declared global and
what cannot.
Contents of the headers preceding codes for different modules: - The information contained in the
headers of different modules should be standard for an organization. The exact format in which the
header information is organized in the header can also be specified. The following are some standard
header data:
The code should be well-documented: - As a rule of thumb, there should be at least one comment line
on average for every three source lines.
The length of any function should not exceed 10 source lines: - A function that is very lengthy is
usually very difficult to understand, as it probably carries out many different tasks. For the same
reason, lengthy functions are likely to have a disproportionately larger number of bugs.
Do not use an identifier for multiple purposes: - Programmers often use the same identifier to
denote several temporary entities. For example, programmers may use a temporary loop variable
both for computing and for storing the final result. The rationale usually given by these
programmers for such multiple uses of variables is memory efficiency, e.g., three variables use
up three memory locations, whereas the same variable used in three different ways uses just one
memory location. However, there are several things wrong with this approach, and hence it should
be avoided. Some of the problems caused by the use of variables for multiple purposes are as follows:
• Each variable should be given a descriptive name indicating its purpose. This is not possible if an
identifier is used for multiple purposes. Use of a variable for multiple purposes can lead to confusion
and make it difficult for somebody trying to read and understand the code.
• Use of variables for multiple purposes usually makes future enhancements more difficult (a short
illustrative sketch follows below).
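The sketch below is a minimal, made-up Python illustration of the single-purpose-identifier guideline above;
the function names and data are invented for this manual, not taken from any project.

# Poor style: the identifier "temp" is reused for two unrelated purposes,
# which obscures intent and complicates later changes.
def average_poor(values):
    temp = 0                     # first purpose: running sum
    for v in values:
        temp += v
    temp = temp / len(values)    # second purpose: the final average
    return temp

# Better style: each variable has one descriptive, single-purpose name.
def average_clear(values):
    running_sum = sum(values)
    mean = running_sum / len(values)
    return mean

print(average_clear([2, 4, 6]))  # 4.0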
Do not use goto statements: - Use of goto statements makes a program unstructured and very
difficult to understand.
Code review: - Code review for a module is carried out after the module has been successfully compiled
and all the syntax errors have been eliminated. Code reviews are an extremely cost-effective strategy for
reducing coding errors and producing high-quality code. Normally, two types of reviews are carried out on
the code of a module. These two types of code review techniques are code inspection and code walk-through.
Code Walk-Throughs: - A code walk-through is an informal code analysis technique. In this technique, a
module is reviewed after it has been coded, successfully compiled and all syntax errors have been
eliminated. A few members of the development team are given the code a few days before the walk-through
meeting to read and understand it. Each member selects some test cases and simulates execution of the
code by hand (i.e., traces execution through each statement and function call). The main objective of the
walk-through is to discover the algorithmic and logical errors in the code. The members note down their
findings to discuss them in a walk-through meeting where the coder of the module is present.
Even though a code walk-through is an informal analysis technique, several guidelines have evolved over
the years for making this naïve but useful technique more effective. Of course, these guidelines are based
on personal experience, common sense, and several subjective factors. Therefore, they should be considered
as examples rather than accepted as rules to be applied dogmatically. Some of these guidelines are the
following:
• The team performing the code walk-through should be neither too big nor too small. Ideally, it should
consist of three to seven members.
• Discussion should focus on discovery of errors and not on how to fix the discovered errors.
• In order to foster cooperation and to avoid the feeling among engineers that they are being
evaluated in the code walk through meeting, managers should not attend the walk-through meetings.
Code Inspection: - In contrast to code walk through, the aim of code inspection is to discover some
common types of errors caused due to oversight and improper programming. In other words, during
code inspection the code is examined for the presence of certain kinds of errors, in contrast to the
hand simulation of code execution done in code walk throughs. For instance, consider the classical
error of writing a procedure that modifies a formal parameter while the calling routine calls that
procedure with a constant actual parameter. It is more likely that such an
error will be discovered by looking for these kinds of mistakes in the code, rather than by simply
hand simulating execution of the procedure. In
addition to the commonly made errors, adherence to coding standards is also checked during
code inspection. Good software development companies collect statistics regarding different
types of errors commonly committed by their engineers and identify the type of errors most
frequently committed. Such a list of commonly committed errors can be used during code
inspection to look out for possible errors.
Following is a list of some classical programming errors which can be checked during code
inspection:
Use of uninitialized variables.
Jumps into loops.
Non terminating loops.
Incompatible assignments.
Array indices out of bounds.
Improper storage allocation and deallocation.
Mismatches between actual and formal parameter in procedure calls
Use of incorrect logical operators or incorrect precedence among operators.
Improper modification of loop variables.
Comparison of equality of floating-point variables, etc.
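Two of the classical errors listed above can be demonstrated in a few lines of Python; this is an illustrative
sketch added for this manual, not a program from the original text.

values = [10, 20, 30]

# Array index out of bounds: valid indices are 0..2, so index 3 raises IndexError.
try:
    print(values[3])
except IndexError as err:
    print("index error caught:", err)

# Comparison of equality of floating-point variables: rounding error makes a
# direct == comparison unreliable; compare against a small tolerance instead.
a = 0.1 + 0.2
print(a == 0.3)               # False because of rounding error
print(abs(a - 0.3) < 1e-9)    # True: tolerance-based comparison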
RESULT: This experiment introduces the concept of understanding SRS, SDD and Testing
Document.
Experiment No.4
Title:- Preparation of the Software Configuration management and Risk management related documents.
H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron
Method:-
Software Configuration management
SCM or Software Configuration management is a Project Function (as defined in the SPMP)
with the goal to make technical and managerial activities more effective. Software configuration
management (SCM) is a software engineering discipline consisting of standard processes and
techniques often used by organizations to manage the changes introduced to its software products.
SCM helps in identifying individual elements and configurations, tracking changes, and version
selection, control, and baselining.
SCM is also known as software control management. SCM aims to control the changes introduced to
large, complex software systems through reliable version selection and version control. The primary
objectives of SCM are:
1. To identify all the items that collectively define the software configuration.
2. To manage changes to one or more of these items.
3. To facilitate the construction of different versions of an application.
4. To ensure that software quality is maintained as the configuration evolves over time.
A risk is any anticipated unfavorable event or circumstance that can occur while a project is being
developed.
1. The project manager needs to identify the different types of risk in advance so that the project
deadlines do not get extended.
2. There are three main activities of risk management: risk identification, risk assessment, and risk
containment.
Risk identification
1. The project manager needs to anticipate the risks in the project as early as possible so that the
impact of each risk can be minimized by using effective risk management plans.
2. Following are the main types of risk that need to be identified
3. Project risk: - these include
Resource related issues
Schedule problems
Budgetary issues
Staffing problem
Customer related issues
4. Technical risk: - these include
Potential design problems
Implementation and interfacing issues
Incomplete specification.
Changing specification and technical uncertainty
Ambiguous specification
Testing and maintenance problem
5. Business risk: -
Market trend changes
Developing a product similar to the existing applications
Personal commitments
3. In order to successfully identify and foresee the different types of risk that might affect a project,
it is a good idea to have a company disaster list.
4. The company disaster list contains all the possible risks or events that can occur in similar
projects.
Risk assessment: -
1. The main objective of risk assessment is to rank the risks in terms of their damage-causing
potential.
2. The priority of each risk can be computed using the equation p = r * s, where p is the priority
with which the risk must be handled, r is the probability of the risk becoming true, and s is the
severity of the damage caused if the risk becomes true.
3. If all the identified risks are prioritized, then the most likely and most damaging risks can be
handled first and the others later on.
Risk containment: -
1. Risk containment includes planning the strategies to handle and face the most likely and most
damaging risks first.
2. Following are the strategies that can be used in general:
a. Avoid the risk: e.g., in case of issues in the design phase with reference to the specified
requirements, one can discuss with the customer to change the specifications and avoid
the risk.
b. Transfer the risk: -
i. This includes purchasing insurance coverage.
ii. Getting the risky component developed by a third party.
Risk reduction and leverage:
a) The project manager must weigh the cost of handling a risk against the corresponding
reduction of the risk.
b) Risk leverage = (risk exposure before reduction - risk exposure after reduction) /
cost of reduction
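The priority formula p = r * s and the risk leverage formula above can be computed directly; the sketch
below is a small Python illustration with made-up numbers, not data from an actual project.

def risk_priority(probability, severity):
    # p = r * s: probability of the risk becoming true times severity of the damage
    return probability * severity

def risk_leverage(exposure_before, exposure_after, cost_of_reduction):
    # (risk exposure before reduction - risk exposure after reduction) / cost of reduction
    return (exposure_before - exposure_after) / cost_of_reduction

# Hypothetical schedule risk: 60% probability, severity 8 on a 1-10 scale.
print(risk_priority(0.6, 8))           # 4.8
# Spending 2 units of cost reduces exposure from 4.8 to 1.2, giving leverage 1.8.
print(risk_leverage(4.8, 1.2, 2.0))    # 1.8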
RESULT: This experiment introduces the preparation of software configuration management and risk
management related documents.
Experiment No. 5
Objective :- To get familiarized with Computer Aided Software Engineering (CASE) tools and techniques.
H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron
Method:-
CASE Tools: -
CASE stands for Computer Aided Software Engineering. It means, development and
maintenance of software projects with help of various automated software tools. CASE tools
are set of software application programs, which are used to automate the SDLC activities.
CASE tools are used by software project managers, analysts and engineers to develop
software system.
• Layer 1 is the user interface, whose function is to help the user interact with the core of
the system. It provides a graphical user interface through which interaction with the system
becomes easy.
• Layer 2 depicts tool management system (TMS) which constitutes multiple tools of
different category using which automation of the development process can be done.
TMS may include some tools to draw diagrams or to generate test cases.
• Layer 3 represents object management system (OMS) which represents the set of objects
generated by the users. Group of design notations, set of test cases (test suite) are treated
as the objects.
• Layer 4 represents a repository which stores the objects developed by the user. Layer 4
is nothing but a database which stores automation files.
Components of CASE Tools: - CASE tools can be broadly divided into the following parts
based on their use at a particular SDLC stage:
Central Repository - CASE tools require a central repository, which can serve as a
source of common, integrated and consistent information. Central repository is a central
place of storage where product specifications, requirement documents, related reports
and diagrams, other useful information regarding management are stored. Central
repository also serves as data dictionary.
Upper Case Tools - Upper CASE tools are used in planning, analysis and design
stages of SDLC.
Lower Case Tools - Lower CASE tools are used in the implementation, testing
and maintenance stages.
Integrated Case Tools - Integrated CASE tools are helpful in all the stages of
SDLC, from Requirement gathering to Testing and documentation.
Figure: - 5.2
Types of CASE tools: - Major categories of CASE tools are:
– Diagram tools
– Project Management tools
– Documentation tools
– Web Development tools
– Quality Assurance tools
– Maintenance tools
Data Flow Diagram: A Data Flow Diagram (DFD) is a traditional visual representation of the
information flows within a system. A neat and clear DFD can depict the right amount of the system
requirement graphically. It can be manual, automated, or a combination of both. It shows how data
enters and leaves the system, what changes the information, and where data is stored.
The objective of a DFD is to show the scope and boundaries of a system as a whole. It may be used
as a communication tool between a system analyst and any person who plays a part in the system, and
it acts as a starting point for redesigning a system. The DFD is also called a data flow graph or
bubble chart.
Types of DFD:
Data Flow Diagrams are either Logical or Physical.
Logical DFD - This type of DFD concentrates on the system process, and flow of data in the
system. For example, in a Banking software system, how data is moved between different
entities.
Physical DFD - This type of DFD shows how the data flow is actually implemented in the
system. It is more specific and closer to the implementation.
The importance and need of different levels of DFD in software design: - The main
reason why the DFD technique is so important and so popular is probably because of
the fact that DFD is a very simple formalism – it is simple to understand and use.
Starting with a set of high-level functions that a system performs, a DFD model
hierarchically represents various sub-functions. In fact, any hierarchical model is
simple to understand. The human mind can easily understand any hierarchical model of a
system because, starting with a very simple and abstract model of the system, different
details are slowly introduced through the different hierarchies. The data flow diagramming technique also
follows a very simple set of intuitive concepts and rules. DFD is an elegant modeling
technique that turns out to be useful not only to represent the results of structured
analysis of a software problem, but also for several other applications such as showing
the flow of documents or items in an organization.
Disadvantages of DFDs: -
Modification to a data layout in a DFD may cause the entire layout to be changed. This is
because the changed data will bring different data to the units that access it. Therefore,
the possible effects of the modification must be evaluated first.
The number of units in a DFD for a large application is high. Therefore, maintenance is
harder, more costly and more error prone. This is because the ability to access the data is
passed explicitly from one component to the other. This is why changes are impractical to
make in DFDs, especially in large systems.
DFDs are inappropriate for a large system because, if changes are to be made to a specific
unit, there is a possibility that the whole DFD needs to be changed. This is because the
change may result in different data flowing into the next unit. Therefore, the whole
application or system may need modification too.
LEVEL 0: -
The first thing we must do is model the main outputs and sources of data in the scenario
above. Then we draw the system box and name the system. Next, we identify the
information that is flowing to the system and from the system.
Level 0 of DFD of Blogger Concept
LEVEL 1: -
The next stage is to create the Level 1 Data Flow Diagram. This highlights the main functions
carried out by the system as follows:
LEVEL 2: -
We now create the Level 2 Data Flow Diagrams. First 'expand' the function boxes 1.1 and
1.2 so that we can fit the process boxes into them. Then position the data flows from Level 1
into the correct process in Level 2 as follows:
Experiment No. 6
H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron
Method:-
Software testing can be stated as the process of verifying and validating that a software application is
bug free, meets the technical requirements as guided by its design and development, and meets the user
requirements effectively and efficiently, handling all the exceptional and boundary cases.
The process of software testing aims not only at finding faults in the existing software but also at
finding measures to improve the software in terms of efficiency, accuracy and usability. It mainly
aims at measuring specification, functionality and performance of a software program or application.
Types of Software Testing:
1. Manual Testing: Manual testing means testing software manually, i.e., without using any automated
tool or script. In this type, the tester takes over the role of an end user and tests the software to
identify any unexpected behavior or bug. There are different stages of manual testing, such as unit
testing, integration testing, system testing, and user acceptance testing.
Testers use test plans, test cases, or test scenarios to test the software and ensure the completeness of
testing. Manual testing also includes exploratory testing, as testers explore the software to identify
errors in it.
2. Automation Testing: Automation testing, also known as test automation, is when the tester writes
scripts and uses other software to test the product. This process involves the automation of a manual
process. Automation testing is used to re-run, quickly and repeatedly, the test scenarios that were
performed manually.
Apart from regression testing, automation testing is also used to test the application from load,
performance, and stress point of view. It increases the test coverage, improves accuracy, and
saves time and money in comparison to manual testing.
Hierarchy of Testing Levels:
Unit Testing: - Unit testing is a testing technique in which individual modules are tested by the
developer himself to determine whether there are any issues. It is concerned with the functional
correctness of the standalone modules. The main aim is to isolate each unit of the system in order
to identify, analyze and fix defects.
Figure: - 6.1
Advantages of unit testing:
Defects are found at an early stage. Since testing is done by the dev team on individual pieces of
code before integration, it helps in fixing issues early on in the source code without affecting
other source code.
It helps maintain the code. Since it is done by individual developers, stress is put on making the
code less interdependent, which in turn reduces the chances of impacting other sets of source code.
It helps in reducing the cost of defect fixes, since bugs are found early in the development cycle.
It helps in simplifying the debugging process. Only the latest changes made in the code need to be
debugged if a test case fails during unit testing.
Disadvantages:
It is difficult to write good unit tests, and the whole process may take a lot of time.
A developer can make a mistake that will affect the whole system.
Not all errors can be detected, since every module is tested separately and different
integration bugs may appear later.
Testing will not catch every error in the program, because it cannot evaluate every
execution path in any but the most trivial programs. This problem is a superset of
the halting problem, which is undecidable.
The same is true for unit testing. Additionally, unit testing by definition only tests
the functionality of the units themselves. Therefore, it will not catch integration
errors or broader system-level errors (such as functions performed across multiple
units, or non-functional test areas such as performance).
Unit testing should be done in conjunction with other software testing activities, as
it can only show the presence or absence of particular errors; it cannot prove a
complete absence of errors.
To guarantee correct behavior for every execution path and every possible input,
and to ensure the absence of errors, other techniques are required, namely the
application of formal methods to prove that a software component has no
unexpected behavior.
Unit Testing Techniques:
1. Black Box Testing - the user interface, inputs and outputs are tested.
2. White Box Testing - the behavior of each individual function is tested.
3. Gray Box Testing - used to execute tests based on risks and assessment methods.
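As a minimal sketch of unit testing in practice (using Python's built-in unittest module; the divide function
is an invented example, not part of the manual's project), each test isolates one behaviour of a single unit:

import unittest

def divide(a, b):
    # Unit under test: refuses to divide by zero.
    if b == 0:
        raise ValueError("b must be non-zero")
    return a / b

class DivideTests(unittest.TestCase):
    def test_normal_division(self):
        self.assertEqual(divide(10, 2), 5)

    def test_zero_divisor_rejected(self):
        with self.assertRaises(ValueError):
            divide(10, 0)

if __name__ == "__main__":
    unittest.main()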
Integration Testing: - It tests integration or interfaces between components, interactions
to different parts of the system such as an operating system, file system and hardware or
interfaces between systems.
Also, after integrating two different components together, we perform integration testing.
As displayed in the image below, when two different modules ‘Module A’ and ‘Module B’ are
integrated, the integration testing is done.
Fig 6.4
2. Top-down integration testing: Testing takes place from top to bottom, following the
control flow or architectural structure (e.g., starting from the GUI or main menu).
Components or systems are substituted by stubs. Below is the diagram of the ‘Top-down
Approach’:
Fig 6.5
3. Bottom-up integration testing: Testing takes place from the bottom of the control
flow upwards. Components or systems are substituted by drivers. Below is the image
of ‘Bottom-up approach’:
Fig6.6
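To make the stub and driver idea concrete, here is a small hedged Python sketch (the module and function
names are invented): in top-down integration a stub stands in for an unfinished lower-level module, while
in bottom-up integration a driver exercises a finished lower-level module.

# Stub: placeholder for a lower-level module that is not yet implemented;
# it returns a fixed, predictable value so the higher-level module can be tested.
def get_exchange_rate_stub(currency):
    return 1.0

# Higher-level module under test (top-down): uses the stub instead of the real service.
def convert_amount(amount, currency, rate_lookup=get_exchange_rate_stub):
    return amount * rate_lookup(currency)

# Driver: a simple harness (bottom-up) that calls the lower-level module directly.
def driver():
    assert convert_amount(100, "EUR") == 100.0
    print("integration check passed")

driver()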
4. System Testing: - System Testing (ST) is a black box testing technique performed to evaluate the
complete system's compliance with the specified requirements. In system testing, the functionalities
of the system are tested from an end-to-end perspective. System testing is usually carried out by a
team that is independent of the development team, in order to measure the quality of the system
without bias. It includes both functional and non-functional testing.
Fig 6.7
RESULT: This experiment introduces the concept of unit and integration Testing.
Experiment No. 7
Title:- To perform various white box and black box testing techniques.
H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron
Method:-
White Box Testing:
White box testing techniques analyze the internal structure, the data structures used, the internal
design, the code structure and the working of the software, rather than just the functionality as in
black box testing. It is also called glass box testing, clear box testing or structural testing.
Working process of white box testing:
Statement coverage: In this technique, the aim is to traverse every statement at least once. Hence,
each line of code is tested. In the case of a flowchart, every node must be traversed at least once.
Since all lines of code are covered, this helps in pointing out faulty code.
Branch Coverage: In this technique, test cases are designed so that each branch from every decision
point is traversed at least once. In a flowchart, all edges must be traversed at least once.
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
In this example, there are 2 conditions: X == 0 and Y == 0. Now, the test cases must make each of these
conditions take both TRUE and FALSE as its value. One possible set of test cases, implemented in the short
sketch below, would be:
#TC1 – X = 0, Y = 55
#TC2 – X = 5, Y = 0
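The two test cases above can be written out as a small Python sketch (the function name is invented for
illustration); together they make each condition take both TRUE and FALSE values, and one extra case
exercises the FALSE branch of the whole decision.

def print_zero_if_any_zero(x, y):
    # Mirrors the pseudocode: IF (X == 0 || Y == 0) PRINT '0'
    if x == 0 or y == 0:
        return "0"
    return ""

assert print_zero_if_any_zero(0, 55) == "0"   # TC1 - X = 0, Y = 55
assert print_zero_if_any_zero(5, 0) == "0"    # TC2 - X = 5, Y = 0
assert print_zero_if_any_zero(5, 55) == ""    # extra case: both conditions FALSE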
Path Testing:
In path testing, we draw the flow graph and test all independent paths. Here, drawing the flow
graph means that flow graphs represent the flow of the program and also show how the parts of the
program connect with one another, as we can see in the image below:
Testing all the independent paths means that, for example, for a path from main() to a function G,
we first set the parameters and test whether the program is correct along that particular path, and
in the same way we test all other paths and fix the bugs.
Flow graph notation: It is a directed graph consisting of nodes and edges. Each node represents
a sequence of statements, or a decision point. A predicate node is the one that represents a
decision point that contains a condition after which the graph splits. Regions are bounded by
nodes and edges.
Loop Testing: Loops are widely used and are fundamental to many algorithms. Hence, in loop testing, we
test loops such as while, for, and do-while, etc., and also check whether the ending condition works
correctly and whether the loop executes the intended number of times.
For example: we have one program where the developers have given about 50,000 loops.
1. {
2. while(50,000)
3. ……
4. ……
5. }
We cannot test this program manually for all 50,000 loop cycles. So we write a small program that
exercises all 50,000 cycles; as we can see in the program below, the test P is written in the same
language as the source code program, and this is known as a unit test. It is written by the developers
only (a Python sketch of such a test follows the listing below).
1. Test P
2. {
3. ……
4. …… }
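A hedged Python sketch of what such a test might look like (the loop body and counts are invented): instead
of tracing 50,000 iterations by hand, a small test program checks the loop's ending condition at the
boundaries and at the full count.

def run_loop(n):
    # Unit under test: a loop expected to execute exactly n times.
    count = 0
    i = 0
    while i < n:
        count += 1
        i += 1
    return count

# Loop testing: zero iterations, one iteration, a typical value and the maximum.
for iterations in (0, 1, 2, 50000):
    assert run_loop(iterations) == iterations
print("loop test passed")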
Nested loops: For nested loops, all the loops are set to their minimum count and we start from the
innermost loop. Simple loop tests are conducted for the innermost loop, and testing works outwards
until all the loops have been tested.
1. Concatenated loops: Independent loops, one after another. Simple loop tests
are applied for each.
If they’re not independent, treat them like nesting.
Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in the optimization of code removing error and helps in removing extra lines of
code.
3. It can start at an earlier stage as it doesn’t require any interface as in case of black box
testing.
4. Easy to automate.
Black Box Testing:
Black box testing checks the functionality of the software against its requirements without examining
the internal code structure. Its generic working process is the following:
o The black box test is based on the specification of requirements, so the requirements are examined
in the beginning.
o In the second step, the tester creates a positive test scenario and a negative test scenario by
selecting valid and invalid input values, to check whether the software processes them correctly.
o In the third step, the tester develops various test cases using techniques such as decision tables,
all-pairs testing, equivalence partitioning, error guessing, cause-effect graphs, etc.
o The fourth phase includes the execution of all test cases.
o In the fifth step, the tester compares the expected output against the actual output.
o In the sixth and final step, if there is any flaw in the software, it is fixed and the software is
tested again.
Functional testing - This black box testing type is related to the functional
requirements of a system; it is done by software testers.
Non-functional testing - This type of black box testing is not related to testing of
specific functionality, but non-functional requirements such as performance, scalability,
usability.
Regression testing – Regression testing is done after code fixes, upgrades or any other system
maintenance to check that the new code has not affected the existing code.
Equivalence Class Testing: It is used to minimize the number of possible test cases to an optimum
level while maintaining reasonable test coverage.
Boundary Value Testing: Boundary value testing is focused on the values at boundaries. This technique
determines whether a certain range of values is acceptable by the system or not. It is very useful in
reducing the number of test cases. It is most suitable for systems where an input lies within certain
ranges (see the sketch below).
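For instance, if a system accepts marks in the range 0 to 100 (a hypothetical range chosen for illustration),
boundary value testing concentrates test cases at and just outside the limits, as in this Python sketch:

def is_valid_marks(marks):
    # Unit under test: accepts integer marks from 0 to 100 inclusive.
    return 0 <= marks <= 100

# Boundary value test cases: just below, at, and just above each boundary.
cases = {-1: False, 0: True, 1: True, 99: True, 100: True, 101: False}
for value, expected in cases.items():
    assert is_valid_marks(value) == expected
print("boundary value tests passed")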
Decision Table Testing: A decision table puts causes and their effects in a matrix.
There is a unique combination in each column.
RESULT: This experiment introduces the concept of white box and black box testing.
Experiment No.8
H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron
METHOD:
Web testing is the name given to software testing that focuses on web applications.
Complete testing of a web-based system before going live can help address issues before the
system is revealed to the public.
Documentation testing
We should start with the preparatory phase, testing the documentation. The tester studies the
received documentation (analyzes the defined site functionality, examines the final layouts of the
site and makes a website test plan for further testing).
The main artifacts related to website testing are analyzed at this stage:
Requirements
Test Plan
Test Cases
Traceability Matrix.
Functionality Testing
Functionality Testing of a Website is a process that includes several testing parameters like
user interface, APIs, database testing, security testing, client and server testing and basic website
functionalities. Functional testing is very convenient and it allows users to perform both manual
and automated testing. It is performed to test the functionalities of each feature on the website.
Test that all links in your webpages are working correctly and make sure there are no broken links
(a small automation sketch follows this list).
Links to be checked will include -
Outgoing links
Internal links
Anchor Links
Mail To Links
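A minimal sketch of automating this broken-link check with Python's requests library (the URLs are
placeholders, and in practice the list of links would be gathered by a crawler or a dedicated tool):

import requests

links_to_check = [
    "https://example.com/",         # placeholder outgoing link
    "https://example.com/about",    # placeholder internal link
]

for url in links_to_check:
    try:
        response = requests.head(url, allow_redirects=True, timeout=10)
        status = response.status_code
    except requests.RequestException as err:
        print("BROKEN", url, "-", err)
        continue
    label = "OK" if status < 400 else "BROKEN"
    print(label, url, "- HTTP", status)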
Check that scripting on the forms works as expected. For example, if a user does not fill a mandatory
field in a form, an error message is shown.
Check default values are being populated
Once submitted, the data in the forms is submitted to a live database or is linked to
a working email address
Forms are optimally formatted for better readability
Test that cookies are working as expected. Cookies are small files used by websites primarily to
remember active user sessions so you do not need to log in every time you visit a website. Cookie
testing (a scripted sketch follows this list) will include
Testing cookies (sessions) are deleted either when cache is cleared or when they
reach their expiry.
Delete cookies (sessions) and test that login credentials are asked for when you next visit
the site.
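One way the second cookie check could be scripted with the requests library is sketched below; the URLs,
form fields and the redirect-to-login behaviour are assumptions about a typical site, not details from this
manual.

import requests

session = requests.Session()
# Hypothetical login endpoint that sets a session cookie on success.
session.post("https://example.com/login",
             data={"user": "tester", "password": "secret"})

# Delete the cookies, then verify that a protected page asks for credentials again,
# e.g. by redirecting to the login page or returning an authorization error.
session.cookies.clear()
response = session.get("https://example.com/account", allow_redirects=False)
assert response.status_code in (301, 302, 401, 403), "protected page served without a session"
print("cookie expiry check passed")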
Test HTML and CSS to ensure that search engines can crawl your site easily. This will include
Testing your end-to-end workflows/business scenarios, which take the user through a series of
webpages to complete.
Test negative scenarios as well, such that when a user executes an unexpected step, an appropriate
error message or help is shown in your web application.
Usability testing
Usability Testing has now become a vital part of any web-based project. It can be carried out by
testers like you or a small focus group similar to the target audience of the web application.
Menus, buttons or Links to different pages on your site should be easily visible and
consistent on all webpages.
User Interface (UI) testing is performed to verify that the graphical user interface of your website
meets the specifications.
Compatibility (Configuration) testing is performed to test your website with each one of the
supported software and hardware configurations:
OS Configuration
Browser Configuration
Database Configuration
Cross-platform testing allows evaluating the working of your site on different operating systems (both
desktop and mobile): Windows, iOS/Mac OS, Linux, Android, BlackBerry, etc.
Cross-browser website testing methods help to verify the correct working of the site in different
browser configurations: Mozilla Firefox, Google Chrome, Internet Explorer, Opera, etc.
Database Testing:
Database is one critical component of your web application, and stress must be laid on testing it
thoroughly. Testing activities will include-
Performance testing
This will ensure your site works under all loads. Software testing activities will include, but are not
limited to -
Security testing
Security testing is vital for e-commerce websites that store sensitive customer information like
credit cards. Testing activities will include-
Experiments beyond PTU Syllabus
Experiment No. 9
H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron
METHOD:
1. Module Names and Numerical Identifiers
Each module must have a module name. Module names should consist of a transitive (or action)
verb and an object noun. Module names and numerical identifiers may be taken directly from
corresponding process names on Data Flow Diagrams or other process charts. The name of the
module and the numerical identifier is written inside the module rectangle. Other than the
module name and number, no other information is provided about the internals of the module.
2. Existing Module
Existing modules may be shown on a Structure Chart. An existing module is represented by
double vertical lines.
3. Unfactoring Symbol
An unfactoring symbol is a construct on a Structure Chart that indicates the module will not be a
module on its own but will be lines of code in the parent module. An unfactoring symbol is
represented with a flat rectangle on top of the module that will not be a module when the
program is developed.
An unfactoring symbol reduces factoring without having to redraw the Structure Chart. Use an
unfactoring symbol when a module that is too small to exist on its own has been included on the
Structure Chart. The module may exist because factoring was taken too far or it may be shown
to make the chart easier to understand. (Factoring is the separation of a process contained as
code in one module into a new module of its own).
RESULT: This experiment introduces the concept of creating modules in structured charts
Experiment No.10
Title:- Representing the interrelationships among modules in a structure chart.
H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron
METHOD:
Invocation is a connecting line that shows the interrelationship between two modules. Modules
are organized in levels, and the invocation lines connect modules between levels. Early forms of
Structure Charts drew arrowheads at the end of a line between two modules to indicate that
program control is passed from one module to the second in the direction of the arrow. Since the
Structure Chart is hierarchical, arrowheads are not necessary. Control is always passed from the
higher level module to the lower level module. Also, eliminating arrowheads reduces clutter on
the chart.
The root level (i.e., top level) of the Structure Chart contains only one module. Control passes
level by level from the root to lower level modules. Control is always returned to the invoking
module. Control eventually returns to the root. There is a maximum of one control relationship
between any two modules. In other words, if Module A invokes Module B, Module B cannot
also invoke Module A.
If a module is called by more than one module and it is not an existing module, the module
which calls it the most is referred to as its parent. A module can only have one parent even
though it may be called by several modules.
Modules may communicate with other modules only by calling a module or through data or
control couples. Couples are information passed between two modules. A module may not
invoke a module that is higher in the hierarchy than the invoking module.
RESULT: This experiment introduces the concept of relationships among modules in structured
charts
Experiment No.11
H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron
ALGORITHM STEPS
Break the system into suitably tractable units by means of transaction analysis.
Convert each unit into a good structure chart by means of transform analysis.
Link the separate units back into the overall system implementation.
Transaction Analysis
The transaction is identified by studying the discrete event types that drive the system. For
example, with respect to railway reservation, a customer may give the following transaction
stimulus:
The three transaction types here are: Check Availability (an enquiry), Reserve Ticket (booking)
and Cancel Ticket (cancellation). At any given time we will get customers interested in giving
any of the above transaction stimuli. In a typical situation, any one stimulus may be entered
through a particular terminal. The human user informs the system of her preference by selecting a
transaction type from a menu. The first step in our strategy is to identify such transaction types
and draw the first-level breakup of modules in the structure chart, by creating a separate module
to coordinate each transaction type. This is shown as follows:
The Main ( ) module, which is the overall coordinating module, gets the information about which
transaction the user prefers to perform through TransChoice. The TransChoice is returned as a
parameter to Main ( ). Remember, we are following our design principles faithfully in decomposing
our modules. The actual detail of how GetTransactionType ( ) works is not relevant for Main ( ). It
may, for example, refresh and print a text menu, prompt the user to select a choice and return this
choice to Main ( ). It will not affect any other components in our breakup, even when this module is
later changed to obtain the same input through a graphical interface instead of a textual menu. The
modules Transaction1 ( ), Transaction2 ( ) and Transaction3 ( ) are the coordinators of transactions
one, two and three respectively. The details of these transactions are to be exploded in the next
levels of abstraction.
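A hedged Python sketch of this first-level breakup (the menu text and module names are invented to match
the description above): Main() only coordinates, and GetTransactionType() can later be changed from a text
menu to a graphical interface without affecting the other modules.

def get_transaction_type():
    # Could later be replaced by a graphical menu without affecting main().
    print("1. Check Availability  2. Reserve Ticket  3. Cancel Ticket")
    return input("Enter choice: ").strip()

def transaction1():   # coordinates the enquiry transaction
    print("checking availability...")

def transaction2():   # coordinates the booking transaction
    print("reserving ticket...")

def transaction3():   # coordinates the cancellation transaction
    print("cancelling ticket...")

def main():
    trans_choice = get_transaction_type()
    dispatch = {"1": transaction1, "2": transaction2, "3": transaction3}
    dispatch.get(trans_choice, lambda: print("invalid choice"))()

if __name__ == "__main__":
    main()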
We will continue to identify more transaction centers by drawing a navigation chart of all input
screens that are needed to get various transaction stimuli from the user. These are to be factored
out in the next levels of the structure chart (in exactly the same way as seen before), for all
identified transaction centers.
Transform Analysis
Transform analysis is the strategy of converting each piece of the DFD (maybe from level 2 or level 3,
etc.) into a structure chart, for all the identified transaction centers. In case the given system has
only one transaction (like a payroll system), then we can start the transformation from the level 1 DFD
itself. Transform analysis is composed of the following five steps [Page-Jones, 1988]:
The central transform is the portion of DFD that contains the essential functions of the system
and is independent of the particular implementation of the input and output. One way of
identifying central transform (Page-Jones, 1988) is to identify the centre of the DFD by pruning
off its afferent and efferent branches. Afferent stream is traced from outside of the DFD to a
flow point inside, just before the input is being transformed into some form of output (For
example, a format or validation process only refines the input – does not transform it). Similarly
an efferent stream is a flow point from where output is formatted for better presentation. The
processes between afferent and efferent stream represent the central transform (marked within
dotted lines above). In the above example, P1 is an input process, and P6 & P7 are output
processes. Central transform processes are P2, P3, P4 & P5 - which transform the given input
into some form of output.
To produce first-cut (first draft) structure chart, first we have to establish a boss module. A boss
module can be one of the central transform processes. Ideally, such process has to be more of a
coordinating process (encompassing the essence of transformation). In case we fail to find a boss
module within, a dummy coordinating module is created
In the above illustration, we have a dummy boss module “Produce Payroll” – which is named in
a way that it indicate what the program is about. Having established the boss module, the
afferent stream processes are moved to left most side of the next level of structure chart; the
efferent stream process on the right most side and the central transform processes in the middle.
Here, we moved a module to get valid timesheet (afferent process) to the left side (indicated in
yellow).
The two central transform processes are moved to the middle (indicated in orange). By grouping the
other two central transform processes with the respective efferent processes, we have created two
modules (in blue), essentially to print results, on the right side.
The main advantage of hierarchical (functional) arrangement of module is that it leads to
flexibility in the software. For instance, if “Calculate Deduction” module is to select deduction
rates from multiple rates, the module can be split into two in the next level – one to get the
selection and another to calculate. Even after this change, the “Calculate Deduction” module
would return the same value.
Expand the structure chart further by using the different levels of the DFD. Factor down till you
reach modules that correspond to processes that access a source/sink or data stores. Once this is
ready, other features of the software, like error handling, security, etc., have to be added. A
module name should not be used for two different modules. If the same module is to be used in
more than one place, it will be demoted down such that “fan in” can be done from the higher
levels. Ideally, the name should sum up the activities done by the module and its sub-ordinates.
Because of the orientation towards the end product, the software, the finer details of how data
originates and is stored (as they appear in the DFD) are not explicit in the Structure Chart. Hence
the DFD may still be needed along with the Structure Chart to understand the data flow while creating
the low-level design.
Let us consider an illustration of structured chart. The following are the major processes in bank:
P4: DD Procedure
P5: MT Procedure
Since all of these are major subsystems with their own major processing, we first do transaction
analysis on them.
Some characteristics of the structure chart as a whole can give clues about the quality of the system.
Page-Jones (1988) suggests the following guidelines for a good decomposition of a structure chart:
Avoid decision splits - keep the span-of-effect within the scope-of-control: i.e., a module can affect
only those modules which come under its control (all subordinates, immediate ones and modules
reporting to them, etc.).
Errors should be reported from the module that both detects an error and knows what the error is.
Restrict the fan-out (number of subordinates of a module) to seven. Increase fan-in (number of
immediate bosses of a module). High fan-in (in a functional way) improves reusability.
RESULT: This experiment introduces the concept of DFD and structured charts