
Department of Computer Science & Engineering

LAB MANUAL

SUBJECT: Software Engineering (BTCS-503-18)

5th Semester (B. Tech)


(Branch: CSE)

Chandigarh Group of Colleges

College of Engineering Landran, Mohali-140307


SUBJECT: Software Engineering (BTCS-503-18)
Index

Sr. No.  Contents
1.   Course Objectives and Outcomes
2.   List of Experiments
3.   Study and usage of OpenProj or similar software to draft a project plan
4.   Study and usage of OpenProj or similar software to track the progress of a project
5.   Preparation of Software Requirement Specification Document, Design Documents and Testing Phase related documents for some problem
6.   Preparation of Software Configuration Management and Risk Management related documents
7.   Study and usage of any design phase CASE tool
8.   To perform unit testing and integration testing
9.   To perform various white box and black box testing techniques
10.  Testing of a web site

Experiments beyond PTU Syllabus

11.  Creating modules in a structured chart
12.  Representing interrelationships among modules in a structured chart
13.  Converting a DFD into a structured chart


COURSE OBJECTIVE

Computers are used in almost every aspect of our lives today. The computer industry is one of the fastest
growing segments of our economy, and that growth promises to continue in the future. An engineering
degree in computer science and engineering (CSE) equips students with strong theoretical as well as
practical knowledge of computer hardware and software.

Computer engineering is projected to be one of the fastest growing professions over the next decade. Strong
employment growth combined with a limited supply of qualified persons promises remarkable employment
prospects in this field. Computer engineers have excellent career prospects in different areas such as research
and development, information security, software development and programming, project management,
database management, system and network administration, data protection, consulting, software and
hardware management, sales, user support, marketing, market research, controlling and information
research, etc.
COURSE OUTCOMES
 To develop an understanding of the basics of Software Engineering.

 To provide information about the various software paradigms.

 To understand the practical use of software engineering design techniques.

 To develop an understanding of the various methodologies of software development.

 To be able to develop software using various tools such as SmartDraw.

 To know the concepts of software engineering.


LIST OF EXPERIMENTS

1. Study and usage of openProj or similar software to Draft a Project Plan.

2. Study and usage of openProj or similar software to track the progress of a Project.

3. Preparation of Software Requirement Specification Document, Design Documents and Testing
Phase related Documents for Some Problem.

4. Preparation of Software Configuration Management and Risk Management related documents.

5. Study and Usage of any design phase CASE tool.

6. To Perform Unit testing and integration testing.

7. To perform Various white box and black box testing techniques.

8. Testing of a web site.

Experiments beyond PTU Syllabus:


9. Creating modules in a structured chart.

10. Representing interrelationship in modules in structured chart.

11. Converting DFD into structured chart.


Experiment No.1

Title: - Study and usage of openProj or similar software to Draft a Project Plan.

Objective: - To get familiar with an open-source project management software system.

S/W Requirement: - Smart Draw

H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron

• Main Memory - 128 MB RAM

• Hard Disk – minimum 20 GB IDE Hard Disk

• Removable Drives – 1.44 MB Floppy Disk Drive, 52X IDE CD-ROM Drive

• PS/2 HCL Keyboard and Mouse

METHOD :-
SmartDraw is a diagram tool used to make flowcharts, organization charts, mind maps, project charts, and
other business visuals. SmartDraw has two versions: an online edition and a downloadable edition for
Windows desktop.

SmartDraw offers a full suite of diagrams, templates, tools and symbols to design almost anything. Use
SmartDraw to make flowcharts, residential and commercial floor plans, organizational charts, CAD and
engineering diagrams, electrical designs, landscape designs, network diagrams, app and site mockups,
wireframes, and more. SmartDraw automates much of the drawing process: users can add, move, and
delete shapes, and the app is smart enough to realign and fix drawings as they are being worked on.
Businesses can create a custom org chart using a pre-built template, or by drawing it from scratch, and
utilize tools such as intelligent formatting, Visio file import, powerful automation, visual symbols, simple
commands, and other intuitive tools.

Features:

#Intelligent Formatting:- SmartDraw is the only diagramming app with an intelligent formatting engine.
Add, delete or move shapes and your diagram will automatically adjust and maintain its arrangement.

#Professional Results:- SmartDraw's intelligent formatting and designer templates combine to
automatically create professional quality diagrams every time.

#Integrate with tools you use:- SmartDraw integrates easily with tools you already use. With just a click,
you can send your diagram directly to Microsoft Word®, Excel®, PowerPoint®, or Outlook®. Export to PDF
and other graphic formats. SmartDraw also has apps for G Suite and the Atlassian® stack: Confluence, Jira,
and Trello. You can also connect to AWS to generate a diagram. See all of SmartDraw's integrations.
#Engineering Power:- SmartDraw allows you to draw and print architectural and engineering diagrams to
scale. SmartDraw even provides an AutoCAD-like annotation layer that automatically resizes to match a
diagram. Most diagramming apps don't do this at all.

#Collaborate Anywhere:- SmartDraw is the only diagramming tool that runs in a web browser on any
platform (Mac, PC, or mobile device) that you can also install behind a firewall on a Windows desktop and
move seamlessly between them.

You and your team can work on the same diagram using SmartDraw. You can also share files with non
SmartDraw users. SmartDraw also works seamlessly with popular file sharing apps like Dropbox®, Box®,
Google Drive™ or OneDrive®.

#Pros of SmartDraw:
SmartDraw is a very complete application for all the types of diagrams you may need, and it has great
strengths compared with Visio, which can be considered the number one (or at least the most used) tool
in the market. When you need to diagram a project, from a basic algorithm to complex Gantt charts, this
tool is very useful and makes the work much easier (in my experience, much easier than Visio). The
connection between diagrams is simpler: finishing one diagram makes it much easier to produce the next.
It has a wide range of options and templates that help in the creation process, contributing to the
development of diagrams for the systems you are going to build. It is a very useful tool for projects where
you need to capture correctly what you want to develop. Use it and try it for yourself.

#Cons of SmartDraw:

Its main drawback is the price: the full version is very expensive, although it does let you enjoy the
desktop edition. A lower price would make it more competitive in the market. The speed of the online
version also fluctuates at times, which is a flaw that should be corrected.

RESULT:
This experiment introduced the basic SmartDraw components through which project planning can be
done and the various notations used to plan a project.
Experiment No.2

Title:- Study and usage of Openproj to track the progress of a project.

Objective :- To get familiar with the basic SmartDraw components for project management.

S/W Requirement :- Smart Draw

H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron

• Main Memory - 128 MB RAM

• Hard Disk – minimum 20 GB IDE Hard Disk

• Removable Drives – 1.44 MB Floppy Disk Drive, 52X IDE CD-ROM Drive

• PS/2 HCL Keyboard and Mouse

Method:-
Finding the right project management solution for your team can be very hard. Finding an open source
project management solution may be even harder. That is the mission of OpenProject: a solution that
allows teams to collaborate throughout the project life cycle. Additionally, the project aims to replace
proprietary software like Microsoft Project Server or Jira.

The OpenProject objectives:

1. Establish and promote an active and open community of developers, users, and companies for continuous
development of the open source project.

2. Define and develop the project vision, the code of conduct, and principles of the application.

3. Create development policies and ensure their compliance.

4. Define and evolve the development and quality assurance processes.

5. Provide the source code to the public.

6. Provide and operate the OpenProject platform.

Mission of OpenProject:

The mission of OpenProject can be quickly summarized: we want to build excellent open source project
collaboration software. And when we say open source, we mean it. We strive to make OpenProject a place
to participate, collaborate, and get involved—with an active, open-minded, transparent, and innovative
community. Companies have finally become aware of the importance of project management software and
also of the big advantages of open source. But why is it that project teams still tend to fall back on old-
fashioned ways of creating project plans, task lists, or status reports with Excel, PowerPoint, or Word—or
keep other expensive proprietary project management software in use? We want to offer a real open source
alternative for companies: free, secure, and easy to use.

Progress of the project is shown below:

RESULT:

This experiment introduced the concept of project monitoring using Gantt charts.
Experiment No.3

Title:- Preparation of Software Requirement Specification Document, Design Documents and Testing
Phase related documents for some problems

Objective :- To study the various phases of Software Development Life Cycle.

S/W Requirement :- Smart Draw

H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron

• Main Memory - 128 MB RAM

• Hard Disk – minimum 20 GB IDE Hard Disk

• Removable Drives – 1.44 MB Floppy Disk Drive, 52X IDE CD-ROM Drive

• PS/2 HCL Keyboard and Mouse

Method:-
SRS: -

An SRS minimizes the time and effort required by developers to achieve desired goals and also
minimizes the development cost. A good SRS defines how an application will interact with
system hardware, other programs and human users in a wide variety of real-world situations.
Parameters such as operating speed, response time, availability, portability, maintainability,
footprint, security and speed of recovery from adverse events are evaluated. Methods of
defining an SRS are described by the IEEE (Institute of Electrical and Electronics Engineers)
specification 830-1998.

Qualities of SRS: -
 Correct
 Unambiguous
 Complete
 Consistent
 Ranked for importance and/or stability
 Verifiable
 Modifiable
 Traceable

The diagram below depicts the various types of requirements that are captured in an SRS.
Figure 3.1:- SRS requirements

ABSTRACT
"Blog" is an abbreviated version of "weblog," which is a term used to describe websites that
maintain an ongoing chronicle of information. A blog features diary-type commentary and links
to articles on other websites, usually presented as a list of entries in reverse chronological order.

Fig – Blog Concept

What does Blog mean?


A frequent, chronological publication of personal thoughts and Web links. Blogs, or weblogs,
started out as a mix of what was happening in a person’s life and what was happening on the
Web, a kind of hybrid diary/news site.
Blogging Tools

These are the basic blogging tools used at marketingterms.com:

Domain Name – Namecheap
WordPress Hosting – WP Engine
(Optional) Page Builder – Beaver Builder
(Optional) Page Builder Addons – Ultimate Addons

Blog versus Website

Many people are still confused over what constitutes a blog versus a website. Part of the problem
is that many businesses use both, integrating them into a single web presence. But there are two
features of a blog that set it apart from a traditional website.

1. Blogs are updated frequently. Whether it's a mommy blog in which a woman shares adventures in
parenting, a food blog sharing new recipes, or a business providing updates to its services, blogs have
new content added several times a week.
2. Blogs allow for reader engagement. Blogs are often included in social media because the ability for
readers to comment and have a discussion with the blogger and other readers makes them
social.
Why Is Blogging So Popular?

There are several reasons why entrepreneurs have turned to blogging.

1. Search engines love new content, and as a result, blogging is a great search engine optimization
(SEO) tool.
2. Blogging provides an easy way to keep your customers and clients up-to-date on what's going on, let
them know about new deals, and provide tips. The more a customer comes to your blog, the more
likely they are to spend money.
3. A blog allows you to build trust and rapport with your prospects. Not only can you show off what you
know, building your expertise and credibility, but because people can post comments and interact
with you, they can get to know you, and hopefully, will trust you enough to buy from you.
4. Blogs can make money. Along with your product or service, blogs can generate income from
other

Project Introduction
The project is based on transactions and their management. The objectives of the project are to maintain
transaction management and concurrency control. Basically, the project is based on a real-world
problem related to banking and transactions. It also provides security features related to the
database, such that only an authenticated accountant or user can access the database or perform a
transaction.
Because it is based on banking, it involves accountants and customers, who are the naïve users. There
are two types of GUI for the different users, and they provide different external views. The system
supports database sharing: two different users can work concurrently if they are authorized users and
have the permissions to access the database. Basically, it is built around database sharing and
transaction management with concurrency control.
The accountant end works as the admin in this project. To add a new user, an accountant adds the
account and personal details to the database; a key is then generated, along with a password, for the
naïve user to access the database. With the help of that key and password, the user can access the
details and can perform transactions on the database through a very user-friendly GUI.
On the other end, the accountant has some additional facilities, such as updating the details of the users.
If any detail of a user must be updated, the accountant is the person who performs this task, not the
naïve user. The accountant is also the person who can close the account of any customer who wants to
close his or her account. After closing, the details of the account remain in the database for some period
of time; a trigger then automatically and permanently deletes the data after that particular time period.

Figure: 1 Prototype Model


Objectives of the project
The project is based on transactions and their management. The objectives of the project are to
maintain transaction management and concurrency control. Basically, the project is
based on a real-world problem related to banking and transactions. It also provides
security features related to the database, such that only authenticated accountants can access it.
NETBANK objectives

It ensures transaction management and concurrency control.

It prevents database problems such as lost updates, dirty reads and unrepeatable reads.
It provides concurrency control features.
It helps provide security features such as authentication and authorization.
It provides different types of views at the external level.
This project provides two different sides for the two different types of users in the bank: accountants
and naïve users.
It provides a very user-friendly GUI for the different types of users.
It provides the database sharing concept by using some networking concepts.
It ensures that the database is shared only between authorized users.

Overall Description

1.) Product Perspective:

User interface: The application that will be developed will have a user-friendly, menu-based
interface.
2.) Hardware Interface:

Processor : Dual Core or Higher


RAM : 512 MB or higher
Other Peripheral Devices : CD-Drive, QWERTY Layout Keyboard

3.) Software Interface:


Operating system: Windows XP, Vista, 7, 8, 8.1 and higher
Platform: Java
Database: SQL server
Language: java

4.) Communication Interface:

The communication between the different parts of the system is important because they depend on each
other.

5.) Memory Constraints:


At least 512MB RAM and 4GB of the Hard disk space will be required for running the
application.

6.) Site Adaptation Requirements:

A centralized database is used so that the system can communicate with it
to retrieve the information.

7.) Constraints:
 The system must maintain a backup.

The problem was analyzed and then the design was prepared. We implemented this design through coding,
and then testing was done. If any errors were found, we tried our best to remove them, and then testing was
repeated so that we could remove all the errors from our project. This project will be maintained and
upgraded from time to time so that we can provide proper and up-to-date material to all the users of this
tutorial.
Figure: 2 (SDLC Cycle)

• Stages of Waterfall Model: -


The SDLC is a process used by a systems analyst to develop an information system, including
requirements, validation, training, and user (stakeholder) ownership. Any SDLC should result in a
high-quality system that meets or exceeds customer expectations, reaches completion within time
and cost estimates, works effectively and efficiently in the current and planned Technology
infrastructure, and is inexpensive to maintain and cost-effective to enhance. SDLC is a
methodology used to describe the process for building information systems.

REQUIREMENTS SPECIFICATION

Prior to the software development effort in any type of system, it is very essential to
understand the requirements of the system and its users. A complete specification of the software
is the first step in the analysis of a system. Requirements analysis provides the designer with a
representation of the functions and procedures that can be translated into data, architectural and
procedural design. The goal of requirements analysis is to find out how the current system is
working and whether there are any areas where improvement is necessary and possible.

INTERFACE REQUIREMENTS: -

1.) User Interface: The package must be user friendly and robust. It must prompt the user with
proper message boxes to help them perform various actions and show how to proceed further. The
system must respond normally under any input conditions and display proper messages
instead of turning up faults and errors.
2.) Software Specification: Software is a set of programs, documents, procedures and routines
associated with a computer system. Software is an essential complement to hardware. It is
the computer programs that, when executed, operate the hardware.

SYSTEM DESIGN: -

System design is the process of developing specifications for a candidate system that meet the criteria
established in the system analysis. Major step in system design is the preparation of the input forms and
the output reports in a form applicable to the user. The main objective of the system design is to make the
system user friendly.

System design involves various stages as:


• Entry
• Data Correction
• Data Deletion
• Data Processing
• Sorting and Indexing
• Report Generation
System design is the creative act of invention, developing new inputs, a database, offline files,
procedures and outputs for processing business data to meet organizational objectives. System design
builds on information gathered during the system analysis.

DATABASE DESIGN: -

The overall objective in the development of database technology has been to treat data as
an organizational resource and as an integrated whole. A database management system allows
data to be protected and organized separately from other resources. A database is an integrated
collection of data. The most significant distinction is between data as seen by the programs and data
as stored on direct-access storage devices. This is the difference between logical and physical
data.
The organization of data in the database aims to achieve three major objectives:
• Data Integration
• Data Integrity
• Data Independence
Methodology

The spiral model is similar to the incremental model, with more emphasis placed on risk
analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and
Evaluation. A software project repeatedly passes through these phases in iterations (called
spirals in this model). In the baseline spiral, starting in the planning phase, requirements are
gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Phases of Spiral Model: -
§ Planning Phase: Requirements are gathered during the planning phase. These include documents
like the ‘BRS’ (‘Business Requirement Specifications’) and the ‘SRS’ (‘System Requirement
Specifications’).

§ Risk Analysis: In the risk analysis phase, a process is undertaken to identify risk and
alternate solutions. A prototype is produced at the end of the risk analysis phase. If any risk
is found during the risk analysis, then alternate solutions are suggested and implemented.

§ Engineering Phase: In this phase software is developed, along with testing at the end of the
phase. Hence in this phase the development and testing are done.

§ Evaluation phase: This phase allows the customer to evaluate the output of the project to date
before the project continues to the next spiral.

Architecture of Spiral model

Advantages of Spiral model

• High amount of risk analysis; hence, avoidance of risk is enhanced.

• Good for large and mission-critical projects.

• Strong approval and documentation control.

• Additional functionality can be added at a later date.

• Software is produced early in the software life cycle.

Disadvantages of Spiral model:

• Can be a costly model to use.
• Risk analysis requires highly specific expertise.
• Project’s success is highly dependent on the risk analysis phase.
• Doesn’t work well for smaller projects.

When to use Spiral model:

• When cost and risk evaluation is important
• For medium to high-risk projects
• When long-term project commitment is unwise because of potential changes to economic priorities
• When users are unsure of their needs
• When requirements are complex
• For a new product line

FEASIBILITY ANALYSIS

A feasibility study is an analysis of how successfully a project can be completed, accounting for factors that
affect it such as economic, technological, legal and scheduling factors. Project managers use feasibility
studies to determine potential positive and negative outcomes of a project before investing a considerable
amount of time and money into it

TYPES OF FEASIBILITY STUDY

TECHNICAL: -
Fundamentally, we are trying to answer the question “Can it actually be built?” To do this we
investigated the technologies to be used on the project. For each technology alternative that we
assessed, we identified its advantages and disadvantages. By studying the available resources
and requirements, we concluded that, at a minimum, the application should be made user-friendly.

ECONOMICAL: -
Keeping all the needs and demands of the system within a minimum budget, we developed new
software which will not only lower the budget but also not require much cost to adopt.
OPERATIONAL: -
The basic question we are trying to answer is “Is it possible to maintain and support this
application once it is in production?” We developed a system which does not require any extra
technical skill or training. It is developed using an environment which is quite familiar to
most of the people concerned with the system. The new system will prove easy to operate
because it is developed in such a way that it is user friendly. Users will find it quite
familiar and easy to operate.

BEHAVIOURAL: -
o Feasibility in terms of behavior of its employees.
o It reflects the behavior of the employees of an organization.
o Main focus is on teamwork and harmony among employees with no room for
discrimination and hatred.

Benefits of Conducting Feasibility Study: -


The importance of a feasibility study is based on the organizational desire to “get it right”
before committing resources, time, or budget. A feasibility study might uncover new ideas
that could completely change a project’s scope. It’s best to make these determinations in
advance, rather than jumping in and learning that the project just won’t work. Conducting
a feasibility study is always beneficial to the project as it gives you and other stakeholders a
clear picture of the proposed project.
Below are some key benefits of conducting a feasibility study:
• Improves project teams’ focus.
• Identifies new opportunities.
• Provides valuable information for a “go/no-go” decision.
• Narrows the business alternatives.
• Identifies a valid reason to undertake the project.
• Enhances the success rate by evaluating multiple parameters.
• Aids decision-making on the project.
• Identifies reasons not to proceed.

Features of the Proposed System: -

In earlier times, the Blogger Concept used a manual system, which was based on
entries in registers. The computerized integrated system will have the following advantages
over the existing system:
• Handling of large volumes of information.
• Complexity of data processing.
• Constant processing time.
• Computational demand.
• Instantaneous queries.
• Security features.

Schedule of documents: -

Sr. No.  Document                  Date
1.       Design document           01/09/22
2.       Coding document           03/09/22
3.       Testing document          09/09/22
4.       Risk Handling document    13/09/22
Design Documents: -

A software design document (SDD) is a written description of a software product that a
software designer writes in order to give a software development team overall guidance on the
architecture of the software project. An SDD usually accompanies an architecture diagram with
pointers to detailed feature specifications of smaller pieces of the design. Practically, a design
document is required to coordinate a large team under a single vision.
A design document needs to be a stable reference, outlining all parts of the software and how
they will work. The document is expected to give a fairly complete description, while
maintaining a high-level view of the software.
There are two kinds of design documents: the HLDD (high-level design document) and the LLDD
(low-level design document).
The design document includes a description of the body and soul of the entire project, with all the
details and the method by which each element will be implemented. It ensures that what is produced
is what you want to produce.

While preparing a Design document: -

Describe not just the body, but the soul.
Make it readable.
Prioritize.
Get into the details.
Some things must be demonstrated.
Not just "what" but "how."
Provide alternatives.
Coding: - Good software development organizations normally require their programmers to adhere
to a well-defined and standard style of coding called coding standards. Most software
development organizations formulate their own coding standards that suit them most, and require
their engineers to follow these standards rigorously. The purpose of requiring all engineers of an
organization to adhere to a standard style of coding is the following:
• A coding standard gives a uniform appearance to the code written by different engineers.
• It enhances code understanding.
• It encourages good programming practices.
A coding standard lists several rules to be followed during coding, such as the way variables are to be
named.

Important facts: -

A version control application is required in this phase.

Before beginning the actual coding, you should spend some time selecting a development tool that
will be suitable for your debugging, coding, modification and designing needs.
Before actually writing code, some standards should be defined, as multiple developers are going to
use the same files for coding.
During development, developers should write appropriate comments so that other developers can
understand the logic behind the code.
Last but most important: regular review meetings need to be conducted at this stage. They help
to identify prospective defects at an early stage and help to improve product and coding
quality.

Coding standards and guidelines: - Good software development organizations usually develop their own
coding standards and guidelines depending on what best suits their organization and the type of products
they develop. The following are some representative coding standards.
Rules for limiting the use of globals: - These rules list what types of data can be declared global and
what cannot.

Contents of the headers preceding codes for different modules: - The information contained in the
headers of different modules should be standard for an organization. The exact format in which the
header information is organized in the header can also be specified. The following are some standard
header data:

• Name of the module.


• Date on which the module was created.
• Author’s name.
• Modification history.
• Synopsis of the module.
• Different functions supported, along with their input/output parameters.
• Global variables accessed/modified by the module.
Naming conventions for global variables, local variables, and constant identifiers: - A possible
naming convention can be that global variable names always start with a capital letter, local variable
names are made of small letters, and constant names are always capital letters.

The code should be well-documented: - As a rule of thumb, there should be at least one comment line
on average for every three source lines.

The length of any function should not exceed 10 source lines: - A function that is very lengthy is
usually very difficult to understand, as it probably carries out many different functions. For the same
reason, lengthy functions are likely to have a disproportionately larger number of bugs.

Do not use an identifier for multiple purposes: - Programmers often use the same identifier to
denote several temporary entities. For example, programmers use a temporary loop variable
also for computing and storing the final result. The rationale usually given by these
programmers for such multiple uses of variables is memory efficiency, e.g., three variables use
up three memory locations, whereas the same variable used in three different ways uses just one
memory location. However, there are several things wrong with this approach, and hence it should
be avoided. Some of the problems caused by the use of variables for multiple purposes are as follows:
• Each variable should be given a descriptive name indicating its purpose. This is not possible if an
identifier is used for multiple purposes. Use of a variable for multiple purposes can lead to confusion
and make it difficult for somebody trying to read and understand the code.
• Use of variables for multiple purposes usually makes future enhancements more difficult.
Do not use go-to statements: - Use of go-to statements makes a program unstructured and makes it
very difficult to understand. (A sketch illustrating several of these standards follows this list.)
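
Below is a brief Java sketch of a module written to follow the standards above: a standard module header,
the naming convention (global names start with a capital letter, local names are in lower case, constant
names are in all capitals), and single-purpose identifiers. The module, its author placeholder and the rate
value are hypothetical illustrations, not taken from the manual.

/*
 * Name of the module : InterestCalculator
 * Date created       : 01/09/22
 * Author             : <author name>
 * Modification hist. : none
 * Synopsis           : Computes simple interest for an account.
 * Functions          : simpleInterest(principal, years) -> interest
 * Globals accessed   : BaseRate (read only)
 */
public class InterestCalculator {
    static double BaseRate = 0.05;          // global variable: name starts with a capital letter
    static final int MONTHS_PER_YEAR = 12;  // constant: name in all capital letters

    static double simpleInterest(double principal, int years) {
        double interest = principal * BaseRate * years; // local: lower case, single purpose
        return interest;                                // the result gets its own variable
    }
}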

Code review: - Code review for a module is carried out after the module has been successfully compiled
and all the syntax errors have been eliminated. Code reviews are an extremely cost-effective
strategy for reducing coding errors and producing high quality code. Normally, two types
of reviews are carried out on the code of a module. These two code review techniques are
code inspection and code walk through.
Code Walk Throughs: - Code walk through is an informal code analysis technique. In this
technique, after a module has been coded, successfully compiled and all syntax errors
eliminated, a few members of the development team are given the code a few days before the
walk-through meeting to read and understand it. Each member selects some test cases and
simulates execution of the code by hand (i.e., traces execution through each statement and
function execution). The main objectives of the walk through are to discover the algorithmic
and logical errors in the code. The members note down their findings to discuss them in a walk-
through meeting where the coder of the module is present.
Even though a code walk through is an informal analysis technique, several guidelines have
evolved over the years for making this naïve but useful analysis technique more effective. Of
course, these guidelines are based on personal experience, common sense, and several subjective
factors. Therefore, these guidelines should be considered as examples rather than accepted as
rules to be applied dogmatically. Some of these guidelines are the following.
• The team performing the code walk through should be neither too big nor too small. Ideally, it
should consist of three to seven members.
• Discussion should focus on discovery of errors and not on how to fix the discovered errors.
• In order to foster cooperation and to avoid the feeling among engineers that they are being
evaluated in the code walk through meeting, managers should not attend the walk-through meetings.
Code Inspection: - In contrast to code walk through, the aim of code inspection is to discover some
common types of errors caused by oversight and improper programming. In other words, during
code inspection the code is examined for the presence of certain kinds of errors, in contrast to the
hand simulation of code execution done in code walk throughs. For instance, consider the classical
error of writing a procedure that modifies a formal parameter while the calling routine calls that
procedure with a constant actual parameter. It is more likely that such an error will be discovered
by looking for this kind of mistake in the code, rather than by simply hand simulating execution of
the procedure. In addition to the commonly made errors, adherence to coding standards is also
checked during code inspection. Good software development companies collect statistics regarding
the different types of errors commonly committed by their engineers and identify the types of errors
most frequently committed. Such a list of commonly committed errors can be used during code
inspection to look out for possible errors.

Following is a list of some classical programming errors which can be checked during code
inspection:
• Use of uninitialized variables.
• Jumps into loops.
• Non-terminating loops.
• Incompatible assignments.
• Array indices out of bounds.
• Improper storage allocation and deallocation.
• Mismatches between actual and formal parameters in procedure calls.
• Use of incorrect logical operators or incorrect precedence among operators.
• Improper modification of loop variables.
• Comparison for equality of floating-point variables, etc.
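
As a hedged illustration (the code is hypothetical, not from the manual), the Java sketch below shows two
of these classical errors as an inspector might flag them, together with the corrected form.

public class InspectionExamples {
    public static void main(String[] args) {
        int[] marks = new int[5];

        // Array index out of bounds: "i <= marks.length" would overrun the array.
        // for (int i = 0; i <= marks.length; i++) { marks[i] = 0; }
        for (int i = 0; i < marks.length; i++) { marks[i] = 0; }   // corrected

        // Comparison for equality of floating-point variables is unreliable:
        double a = 0.1 + 0.2;
        // if (a == 0.3) { ... }                   // flagged during inspection
        if (Math.abs(a - 0.3) < 1e-9) {            // corrected: compare within a tolerance
            System.out.println("approximately equal");
        }
    }
}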

RESULT: This experiment introduces the concepts of the SRS, the SDD and testing-phase
documents.
Experiment No.4

Title:- Preparation of the Software Configuration management and Risk management related documents.

Objective :- To study SCM and Risk Management.

S/W Requirement :- Smart Draw

H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron

• Main Memory - 128 MB RAM

• Hard Disk – minimum 20 GB IDE Hard Disk

• Removable Drives – 1.44 MB Floppy Disk Drive, 52X IDE CD-ROM Drive

• PS/2 HCL Keyboard and Mouse

Method:-
Software Configuration management

SCM or Software Configuration management is a Project Function (as defined in the SPMP)
with the goal to make technical and managerial activities more effective. Software configuration
management (SCM) is a software engineering discipline consisting of standard processes and
techniques often used by organizations to manage the changes introduced to its software products.
SCM helps in identifying individual elements and configurations, tracking changes, and version
selection, control, and baselining.

SCM is also known as software control management. SCM aims to control changes introduced to
large complex software systems through reliable version selection and version control.

The SCM system has the following advantages:


Reduced redundant work.
Effective management of simultaneous updates.
Avoids configuration-related problems.
Facilitates team coordination.
Helps in building management; managing tools used in builds.
Defect tracking: It ensures that every defect has traceability back to its source.
Benefits of Software Configuration Management
SCM provides significant benefits to all projects regardless of size, scope, and complexity. Some
of the most common benefits experienced by project teams applying the SCM disciplines
described in this guide are possible because the SCM system:
Organization
Because Configuration Management provides the framework for larger information
management programs, it should go without saying that it is critical for the
management and organization of information as a whole. With a well-ordered system in
place, a good IT worker should be able to see all of the past system implementations of the
business, and can better address future needs and changes to keep the system up to date
and running smoothly.
Reliability
Nothing is worse than an unreliable system that is constantly down and needing repairs
because a company’s configuration management team is lacking in organization and
proactiveness. If the system is used correctly, it should run like the well-oiled machine that it
is, ensuring that every department in the company can get their jobs done properly.
Increased stability and efficiency of the system will trickle down into every division of a
company, including customer relations, as the ease and speed with which problems can
be solved and information can be accessed will surely make a positive impact.
Cost Reduction and Risks
Like anything else in business, a lack of maintenance and attention to details can have
greater risks and cost down the line, as opposed to proactive action before a problem
arises. Configuration Management saves money with the constant system maintenance,
record keeping, and checks and balances to prevent repetition and mistakes. The
organized record keeping of the system itself saves time for the IT department and reduces
wasted money for the company with less money being spent on fixing recurring or
nonsensical issues.
SCM Process
The software configuration management process defines a series of tasks that have four primary
objectives:

1. To identify all the items that collectively define the software configuration.
2. To manage changes to one or more of these items.
3. To facilitate the construction of different versions of an application.
4. To ensure that software quality is maintained as the configuration evolves over time.

Figure 4.1: - SCM process


Risk Management

A risk is any anticipated unfavorable event or circumstance that can occur while a project is
being developed.
1. The project manager needs to identify the different types of risk in advance so that
project deadlines don’t get extended.
2. There are three main activities of risk management.

Risk identification
1. The project manager needs to anticipate the risks in a project as early as possible so
that the impact of each risk can be minimized by using effective risk management
plans.
2. Following are the main types of risk that need to be identified:
3. Project risks: - these include
 Resource related issues
 Schedule problems
 Budgetary issues
 Staffing problems
 Customer related issues
4. Technical risks: - these include
 Potential design problems
 Implementation and interfacing issues
 Incomplete specifications
 Changing specifications and technical uncertainty
 Ambiguous specifications
 Testing and maintenance problems
5. Business risks: - these include
 Market trend changes
 Developing a product similar to existing applications
 Personal commitments
6. In order to be able to successfully identify and foresee the different types of risk that
might affect a project, it is a good idea to have a company disaster list.
7. The company disaster list contains all the possible risks or events that have occurred in
similar projects.

Risk assessment: -
1. The main objective of risk assessment is to rank the risks in terms of their damage-causing
potential.
2. The priority of each risk can be computed using the equation p = r * s, where p is the priority
with which the risk must be handled, r is the probability of the risk becoming true, and s is the
severity of the damage caused if the risk becomes true. A worked sketch follows below.
3. If all the identified risks are prioritized, then the most likely and most damaging risks can be
handled first and the others later on.
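
As a worked sketch of p = r * s, the Java fragment below ranks three hypothetical risks; the probability
and severity values (severity on an assumed 1-10 scale) are illustrative, not from the manual.

public class RiskPriority {
    public static void main(String[] args) {
        String[] risks       = {"Schedule slip", "Ambiguous spec", "Staff turnover"};
        double[] probability = {0.6,             0.3,              0.1};   // r
        double[] severity    = {5,               8,                9};     // s

        for (int i = 0; i < risks.length; i++) {
            double p = probability[i] * severity[i];   // p = r * s
            System.out.printf("%-15s p = %.1f%n", risks[i], p);
        }
        // Highest priority first: "Schedule slip" (3.0), then "Ambiguous spec"
        // (2.4), then "Staff turnover" (0.9).
    }
}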

Risk containment: -
1. Risk containment includes planning the strategies to handle and face the most likely
and damaging risks first.
2. Following are the strategies that can be used in general:
a. Avoid the risk: - e.g., in case of issues in the design phase with reference to the
specified requirements, one can discuss with the customer changing the
specifications and so avoid the risk.
b. Transfer the risk: -
i. This includes purchasing insurance coverage.
ii. Getting the risky component developed by a third party.
Risk reduction (leverage factor): -

a) The project manager must weigh the cost of handling a risk against the
corresponding reduction of the risk.
b) Risk leverage = (risk exposure before reduction - risk exposure after reduction) /
cost of reduction. A worked sketch follows below.
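
The Java fragment below works the risk leverage formula through with assumed, illustrative figures
(arbitrary cost units); a leverage above 1 suggests the reduction is worth its cost.

public class RiskLeverage {
    public static void main(String[] args) {
        double exposureBefore  = 100000; // risk exposure before reduction (assumed)
        double exposureAfter   = 20000;  // risk exposure after reduction (assumed)
        double costOfReduction = 40000;  // cost of the mitigation itself (assumed)

        double leverage = (exposureBefore - exposureAfter) / costOfReduction;
        System.out.println("Risk leverage = " + leverage); // prints 2.0
    }
}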

RESULT:

This experiment introduces the concept of Risk Management.


Experiment No. 5

Title:- Study and usage of any Design phase CASE tool

Objective :- To get familiarized with computer-aided tools and techniques.

S/W Requirement :- Smart Draw

H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron

• Main Memory - 128 MB RAM

• Hard Disk – minimum 20 GB IDE Hard Disk

• Removable Drives – 1.44 MB Floppy Disk Drive, 52X IDE CD-ROM Drive

• PS/2 HCL Keyboard and Mouse

Method:-
CASE Tools: -

CASE stands for Computer Aided Software Engineering. It means the development and
maintenance of software projects with the help of various automated software tools. CASE tools
are a set of software application programs which are used to automate SDLC activities.
CASE tools are used by software project managers, analysts and engineers to develop
software systems.

Reasons for using case tools:


• The primary reasons for using a CASE tool are:
– To increase productivity
– To help produce better quality software at lower cost
– To decrease the development time and cost.
• Various tools are incorporated in CASE and are called CASE tools, which are used
to support different stages and milestones in a software development lifecycle.

Architecture of CASE tools: -


Figure: - 5.1

• Layer 1 is the user interface, whose function is to help the user interact with the core of
the system. It provides a graphical user interface through which
interaction with the system becomes easy.
• Layer 2 depicts the tool management system (TMS), which constitutes multiple tools of
different categories through which automation of the development process can be done.
The TMS may include tools to draw diagrams or to generate test cases.
• Layer 3 represents the object management system (OMS), which represents the set of objects
generated by the users. A group of design notations or a set of test cases (a test suite) is treated
as an object.
• Layer 4 represents a repository which stores the objects developed by the user. Layer 4
is nothing but a database which stores automation files.

Components of CASE Tools: - CASE tools can be broadly divided into the following parts
based on their use at a particular SDLC stage:

 Central Repository - CASE tools require a central repository, which can serve as a
source of common, integrated and consistent information. Central repository is a central
place of storage where product specifications, requirement documents, related reports
and diagrams, other useful information regarding management are stored. Central
repository also serves as data dictionary.
 Upper Case Tools - Upper CASE tools are used in planning, analysis and design
stages of SDLC.
 Lower Case Tools - Lower CASE tools are used in the implementation, testing
and maintenance stages.
 Integrated Case Tools - Integrated CASE tools are helpful in all the stages of
SDLC, from Requirement gathering to Testing and documentation.
Figure: - 5.2
Types of CASE tools: - Major categories of CASE tools are:
– Diagram tools
– Project Management tools
– Documentation tools
– Web Development tools
– Quality Assurance tools
– Maintenance tools

Benefits of CASE tools


1. Project Management and control is improved: CASE tools can aid the project
management and control aspects of a development environment. Some CASE tools allow
for integration with industry-standard project management methods (such as PRINCE).
Others incorporate project management tools such as PERT charts and critical path
analysis. By its very nature, a CASE tool provides the vehicle for managing more
effectively the development activities of a project.
2. System Quality is improved: CASE tools promote standards within a development
environment. The use of graphical tools to specify the requirements of a system can also
help remove the ambiguities that often lead to poorly defined systems. Therefore, if used
correctly, a CASE tool can help improve the quality of the specification, the subsequent
design and the eventual working system.
3. Consistency checking is automated: Large amounts of information about a business area
and its requirement are gathered during the analysis phase of an information systems
development project. Using a manual system to record and cross reference this
information is both time-consuming and inefficient. One of the advantages of using CASE
tool is that all data definitions and other relevant information can be stored in a central
repository that can then be used to cross check the consistency of the different views
being modelled.
4. Productivity is increased: One of the most obvious benefits of a CASE tool is that it may
increase the productivity of the analysis team. If used properly, the CASE tool will
provide a support environment enabling analysts to share information and resources,
manage the project effectively and produce supporting documentation quickly.
5. The maintenance effort is better supported: It has been argued that CASE tools help
reduce the maintenance effort required to support the system once it is operational. CASE
tools can be used to provide comprehensive and up-to-date documentation – this is
obviously a critical requirement for any maintenance effort. CASE tools should result in
better systems being developed in the first place.

Data Flow Diagram: A Data Flow Diagram (DFD) is a traditional visual representation of the
information flows within a system. A neat and clear DFD can depict the right amount of the system
requirement graphically. It can be manual, automated, or a combination of both. It shows how data
enters and leaves the system, what changes the information, and where data is stored.

The objective of a DFD is to show the scope and boundaries of a system as a whole. It may be used
as a communication tool between a systems analyst and any person who plays a part in the system,
and it acts as a starting point for redesigning a system. The DFD is also called a data flow graph or
bubble chart.
Types of DFD:
Data Flow Diagrams are either Logical or Physical.

 Logical DFD - This type of DFD concentrates on the system process, and flow of data in the
system. For example, in a Banking software system, how data is moved between different
entities.
 Physical DFD - This type of DFD shows how the data flow is actually implemented in the
system. It is more specific and closer to the implementation.
The importance and need of different levels of DFD in software design: - The main
reason why the DFD technique is so important and so popular is probably that
a DFD is a very simple formalism – it is simple to understand and use.
Starting with a set of high-level functions that a system performs, a DFD model
hierarchically represents various sub-functions. In fact, any hierarchical model is
simple to understand. The human mind can easily understand any
hierarchical model of a system, because in a hierarchical model, starting with a very
simple and abstract model of a system, different details of the system are slowly
introduced through the different hierarchies. The data flow diagramming technique also
follows a very simple set of intuitive concepts and rules. The DFD is an elegant modeling
technique that turns out to be useful not only for representing the results of structured
analysis of a software problem, but also for several other applications, such as showing
the flow of documents or items in an organization.

Disadvantages of DFDs: -
Modification to a data layout in a DFD may cause the entire layout to be changed. This is
because the changed data will bring different data to the units that access it.
Therefore, evaluation of the possible effects of a modification must be considered
first.
The number of units in a DFD for a large application is high. Therefore, maintenance is
harder, more costly and error prone. This is because the ability to access the data is passed
explicitly from one component to the other. This is why changes are impractical to make
in DFDs, especially in large systems.
DFDs are inappropriate for a large system because if changes are to be made to a
specific unit, there is a possibility that the whole DFD needs to be changed. This is because
the change may result in different data flowing into the next unit. Therefore, the whole
application or system may need modification too.
LEVEL 0: -
The first thing we must do is model the main outputs and sources of data in the scenario
above. We then draw the system box and name the system. Next, we identify the
information that flows to the system and from the system.
Level 0 of DFD of Blogger Concept

LEVEL 1: -

The next stage is to create the Level 1 Data Flow Diagram. This highlights the main functions
carried out by the system as follows:

Level 1 of DFD of Blogger Concept


LEVEL 2: -

We now create the Level 2 Data Flow Diagrams. First 'expand' the function boxes 1.1 and
1.2 so that we can fit the process boxes into them. Then position the data flows from Level 1
into the correct process in Level 2 as follows:

Level 2 of DFD of Blogger Concept

RESULT: This experiment introduces the concept and use of CASE tools.


Experiment No. 6

Title:- To perform unit testing and integration testing.

Objective :-To get familiarize with different type of testing techniques.

S/W Requirement :- Smart Draw

H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron

• Main Memory - 128 MB RAM

• Hard Disk – minimum 20 GB IDE Hard Disk

• Removable Drives – 1.44 MB Floppy Disk Drive, 52X IDE CD-ROM Drive

• PS/2 HCL Keyboard and Mouse

Method:-
Software testing can be stated as the process of verifying and validating that a piece of software or an
application is bug free, meets the technical requirements as guided by its design and development,
and meets the user requirements effectively and efficiently while handling all the exceptional and
boundary cases.
The process of software testing aims not only at finding faults in the existing software but also at
finding measures to improve the software in terms of efficiency, accuracy and usability. It mainly
aims at measuring the specification, functionality and performance of a software program or application.
Types of Software Testing:

1. Manual Testing: Manual testing means testing software manually, i.e., without using any
automated tool or script. In this type, the tester takes over the role of an end user and tests the
software to identify any unexpected behavior or bugs. There are different stages of manual testing,
such as unit testing, integration testing, system testing, and user acceptance testing.
Testers use test plans, test cases, or test scenarios to test software and to ensure the completeness
of testing. Manual testing also includes exploratory testing, as testers explore the software to
identify errors in it.
2. Automation Testing: Automation testing, which is also known as Test Automation, is when the
tester writes scripts and uses other software to test the product. This process involves the automation
of a manual process. Automation testing is used to re-run, quickly and repeatedly, the test scenarios
that were previously performed manually.
Apart from regression testing, automation testing is also used to test the application from a load,
performance, and stress point of view. It increases test coverage, improves accuracy, and
saves time and money in comparison to manual testing.
Hierarchy of Testing Levels:
Unit Testing: - Unit testing is a testing technique in which individual modules are
tested by the developer to determine whether there are any issues. It is concerned with the
functional correctness of standalone modules. The main aim is to isolate each unit of
the system in order to identify, analyze and fix defects.

Unit Testing Life cycle: -

Figure: - 6.1
Advantages of unit testing:

Defects are found at an early stage. Since it is done by the dev team by testing individual
pieces of code before integration, it helps in fixing the issues early on in source code
without affecting other source codes.
It helps maintain the code. Since it is done by individual developers, stress is put on
making the code less interdependent, which in turn reduces the chances of impacting
other sets of source code.
It helps in reducing the cost of defect fixes since bugs are found early on in the
development cycle.
It helps in simplifying the debugging process. Only latest changes made in the code need
to be debugged if a test case fails while doing unit testing.

Disadvantages:
 It’s difficult to write good unit tests, and the whole process may take a lot of time.
 A developer can make a mistake that will affect the whole system.
 Not all errors can be detected, since every module is tested separately; different
integration bugs may appear later.
 Testing will not catch every error in the program, because it cannot evaluate every
execution path in any but the most trivial programs. This problem is a superset of
the halting problem, which is undecidable.
 The same is true for unit testing. Additionally, unit testing by definition only tests
the functionality of the units themselves. Therefore, it will not catch integration
errors or broader system-level errors (such as functions performed across multiple
units, or non-functional test areas such as performance).
 Unit testing should be done in conjunction with other software testing activities, as
unit tests can only show the presence or absence of particular errors; they cannot
prove a complete absence of errors.
 To guarantee correct behavior for every execution path and every possible input,
and to ensure the absence of errors, other techniques are required, namely the
application of formal methods to prove that a software component has no
unexpected behavior.
Unit Testing Techniques:
1. Black Box Testing - used to test the unit's interface, inputs and outputs.
2. White Box Testing - used to test the behavior of each of the unit's functions against its internal structure.
3. Gray Box Testing - combines the two, using partial knowledge of the internals to design tests and assess risks.
Integration Testing: - Integration testing exercises the integration or interfaces between components, the interactions between different parts of the system (such as the operating system, file system and hardware), and the interfaces between systems.
 After integrating two different components we perform integration testing. As displayed in the figures below, when two different modules 'Module A' and 'Module B' are integrated, integration testing is done.

Figures 6.2 and 6.3

 Integration testing is done by a specific integration tester or test team.
 Integration testing follows two approaches, known as the 'Top Down' approach and the 'Bottom Up' approach, as shown in the figures below.

Below are the integration testing techniques:


1. Big Bang integration testing: - In Big Bang integration testing all components or modules are integrated simultaneously, after which everything is tested as a whole. As per the figure below, all the modules from 'Module 1' to 'Module 6' are integrated simultaneously and then testing is carried out.

Fig 6.4

2. Top-down integration testing: Testing takes place from top to bottom, following the control flow or architectural structure (e.g., starting from the GUI or main menu). Components or systems are substituted by stubs. Below is the diagram of the top-down approach:

Fig 6.5
3. Bottom-up integration testing: Testing takes place from the bottom of the control flow upwards. Components or systems are substituted by drivers. Below is the diagram of the bottom-up approach:

Fig 6.6
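A hedged sketch of the top-down idea in Python (the function names are illustrative, not from the manual): the higher-level module is exercised first, with the not-yet-ready lower-level module replaced by a stub using the standard unittest.mock library:

import unittest
from unittest import mock

def get_exchange_rate(currency):
    # Lower-level module (Module B): real implementation still pending.
    raise NotImplementedError

def convert(amount, currency):
    # Higher-level module (Module A) that depends on Module B.
    return amount * get_exchange_rate(currency)

class TestTopDownIntegration(unittest.TestCase):
    def test_convert_with_stubbed_lower_module(self):
        # The stub stands in for Module B so Module A's interface can be tested.
        with mock.patch(__name__ + ".get_exchange_rate", return_value=82.0):
            self.assertEqual(convert(2, "USD"), 164.0)

if __name__ == "__main__":
    unittest.main()

In a bottom-up approach the roles reverse: the real get_exchange_rate() would be tested first, called by a small throwaway driver instead of the real convert().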

System Testing: - System Testing (ST) is a black box testing technique performed to evaluate the complete system's compliance with its specified requirements. In system testing, the functionalities of the system are tested from an end-to-end perspective. System testing is usually carried out by a team that is independent of the development team, in order to measure the quality of the system without bias. It includes both functional and non-functional testing.

Fig 6.7

RESULT: This experiment introduces the concepts of unit testing and integration testing.

Experiment No. 7

Title:- To perform various white box and black box testing techniques.

Objective :- To get familiar with detailed testing techniques.

S/W Requirement :- Smart Draw

H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron

• Main Memory - 128 MB RAM

• Hard Disk – minimum 20 GB IDE Hard Disk

• Removable Drives–1.44 MB Floppy Disk Drive

–52X IDE CD-ROM Drive

•PS/2 HCL Keyboard and Mouse

Method:-
White Box Testing:

White box testing techniques analyze the internal structure of the software: the data structures used, the internal design, the code structure and the way the software works, rather than just its functionality as in black box testing. It is also called glass box testing, clear box testing or structural testing.
Working process of white box testing:

 Input: requirements, functional specifications, design documents, source code.
 Processing: performing risk analysis to guide the whole process.
 Proper test planning: designing test cases so as to cover the entire code; executing them and repeating until error-free software is reached; communicating the results.
 Output: preparing the final report of the entire testing process.
Testing techniques:

 Statement coverage: In this technique the aim is to traverse every statement at least once, so each line of code is tested. In a flowchart, every node must be traversed at least once. Since all lines of code are covered, this helps in pointing out faulty code.

 Branch Coverage: In this technique, test cases are designed so that each branch from every decision point is traversed at least once. In a flowchart, every edge must be traversed at least once.

 Condition Coverage: In this technique, all individual conditions must be covered, as shown in the following example:

READ X, Y
IF (X == 0 || Y == 0)
PRINT '0'

In this example there are two conditions: X == 0 and Y == 0. Tests must make each condition take both TRUE and FALSE values. One possible set of test cases would be:
 TC1: X = 0, Y = 55
 TC2: X = 5, Y = 0
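The same example can be expressed as a runnable sketch (Python assumed; prints_zero is a stand-in for the pseudocode above) so that the condition-coverage test cases are checked automatically:

def prints_zero(x, y):
    # Mirrors: IF (X == 0 || Y == 0) PRINT '0'
    return "0" if (x == 0 or y == 0) else ""

assert prints_zero(0, 55) == "0"   # TC1: X == 0 is TRUE, Y == 0 is FALSE
assert prints_zero(5, 0) == "0"    # TC2: X == 0 is FALSE, Y == 0 is TRUE
assert prints_zero(5, 55) == ""    # both conditions FALSE (covers the other branch)
print("condition coverage test cases passed")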

Path Testing:

In path testing we draw flow graphs and test all independent paths. The flow graph represents the flow of control through the program and shows how its parts are connected to one another, as in the figure below.

Testing all independent paths means, for example, taking a path from main() to function G, setting the parameters, and testing whether the program is correct along that particular path; in the same way we test all the other paths and fix the bugs.

Flow graph notation: A flow graph is a directed graph consisting of nodes and edges. Each node represents a sequence of statements or a decision point. A predicate node represents a decision point containing a condition, after which the graph splits. Regions are bounded by nodes and edges.

Loop Testing: Loops are widely used and are fundamental to many algorithms. In loop testing we test loops such as while, for and do-while, checking that the terminating condition works correctly and that the loop bounds are adequate.

For example, suppose the developers have written a program whose loop runs for about 50,000 cycles:

{
while (50,000)
……
……
}

We cannot test all 50,000 loop cycles manually, so we write a small program that drives all 50,000 cycles. As shown below, the test P is written in the same language as the source code; this is known as a unit test, and it is written by the developers themselves.

Test P
{
……
……
}
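A hedged sketch of such a driver in Python (the count of 50,000 is taken from the example above; run_loop is an illustrative name):

def run_loop(cycles):
    # Unit under test: a loop expected to execute exactly `cycles` times.
    count = 0
    for _ in range(cycles):
        count += 1          # stand-in for the real loop body
    return count

# Simple-loop test values: skip the loop entirely, one pass, two passes,
# and the full 50,000 cycles, checked automatically rather than by hand.
for n in (0, 1, 2, 50000):
    assert run_loop(n) == n
print("all loop tests passed")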

Nested loops: For nested loops, all the loops are set to their minimum count and we start from the innermost loop. Simple loop tests are conducted for the innermost loop, and this is worked outwards until all the loops have been tested.
Concatenated loops: These are independent loops placed one after another; simple loop tests are applied to each. If they are not independent, treat them as nested loops.
Advantages:

1. White box testing is very thorough, as the entire code and all internal structures are tested.
2. It results in optimization of the code, removing errors and eliminating extra lines of code.
3. It can start at an earlier stage, as it does not require a user interface the way black box testing does.
4. It is easy to automate.

Disadvantages:

1. The main disadvantage is that it is very expensive.
2. Redesigned or rewritten code requires the test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming language, as opposed to black box testing.
4. Missing functionality cannot be detected, since only the code that exists is tested.
5. It is very complex and at times not realistic.

Black Box Testing:

Black-box testing is a method of software testing that examines the functionality of an application based on its specifications; it is also known as specification-based testing. An independent testing team usually performs this type of testing during the software testing life cycle.
The method can be applied at every level of software testing: unit, integration, system and acceptance testing.

Generic steps of black box testing

o Black box testing is based on the specification of requirements, so the specification is examined at the beginning.
o In the second step, the tester creates positive and negative test scenarios by selecting valid and invalid input values, to check whether the software processes them correctly.
o In the third step, the tester develops test cases using techniques such as decision tables, all-pairs testing, equivalence partitioning, error guessing and cause-effect graphing.
o The fourth step is the execution of all test cases.
o In the fifth step, the tester compares the expected output against the actual output.
o In the sixth and final step, any flaw found in the software is fixed and the software is tested again.

How to do Black Box Testing

Here are the generic steps followed to carry out any type of black box testing:

 Initially, the requirements and specifications of the system are examined.
 The tester chooses valid inputs (positive test scenarios) to check whether the SUT (system under test) processes them correctly, and some invalid inputs (negative test scenarios) to verify that the SUT is able to detect them.
 The tester determines the expected outputs for all those inputs.
 The software tester constructs test cases with the selected inputs.
 The test cases are executed.
 The software tester compares the actual outputs with the expected outputs.
 Defects, if any, are fixed and re-tested.

Types of Black Box Testing


There are many types of Black Box Testing but the following are the prominent ones -

 Functional testing - this type of black box testing is related to the functional requirements of a system; it is done by software testers.
 Non-functional testing - this type of black box testing is not related to the testing of specific functionality, but to non-functional requirements such as performance, scalability and usability.
 Regression testing - regression testing is done after code fixes, upgrades or any other system maintenance, to check that the new code has not affected the existing code.

Tools used for Black Box Testing:


The tools used for black box testing largely depend on the type of black box testing you are doing.

 For functional/regression tests you can use QTP or Selenium.
 For non-functional tests you can use LoadRunner or JMeter.

Black Box Testing Techniques


Following are the prominent test strategies amongst the many used in black box testing:

 Equivalence Class Testing: used to minimize the number of possible test cases to an optimum level while maintaining reasonable test coverage.
 Boundary Value Testing: focused on the values at boundaries. This technique determines whether a certain range of values is acceptable to the system or not. It is very useful in reducing the number of test cases, and is most suitable for systems where the input lies within certain ranges. A small sketch follows this list.
 Decision Table Testing: a decision table puts causes and their effects in a matrix, with a unique combination of conditions in each column.
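As an illustrative sketch of boundary value testing (Python assumed; the valid range of 18 to 60 is an invented requirement, not one from the manual), the tests probe the values just inside and just outside each limit:

def is_valid_age(age):
    # System rule under test: accept ages from 18 to 60 inclusive.
    return 18 <= age <= 60

# Boundary values: min-1, min, min+1, max-1, max, max+1.
cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}
for age, expected in cases.items():
    assert is_valid_age(age) == expected, "failed for age %d" % age
print("boundary value test cases passed")

Six test cases around the two boundaries replace exhaustive testing of every possible age, which is exactly the reduction in test cases the technique promises.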

Advantages of Black Box Testing

 The tester can be non-technical.
 It is used to verify contradictions between the actual system and the specifications.
 Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing

 The test inputs need to be drawn from a large sample space.
 It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
 There is a chance of leaving some paths unidentified during this testing.

RESULT: This experiment introduces the concepts of white box and black box testing.

Experiment No.8

Title :- Web Application Testing Complete Guide (How to Test a Website).

Objective :- To get familiar with testing a web application.

S/W Requirement :- Smart Draw

H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron

• Main Memory - 128 MB RAM

• Hard Disk – minimum 20 GB IDE Hard Disk

• Removable Drives–1.44 MB Floppy Disk Drive

–52X IDE CD-ROM Drive

•PS/2 HCL Keyboard and Mouse

METHOD:

Web testing is the name given to software testing that focuses on web applications.
Complete testing of a web-based system before going live can help address issues before the
system is revealed to the public.

Documentation testing

We start with the preparatory phase: testing the documentation. The tester studies the received documentation, analyzes the defined site functionality, examines the final layouts of the site and prepares a website test plan for further testing.

The main artifacts related to website testing are analyzed at this stage:

 Requirements
 Test Plan
 Test Cases
 Traceability Matrix.

Functionality Testing

Functionality testing of a website is a process that covers several testing parameters, such as the user interface, APIs, database testing, security testing, client and server testing and basic website functionality. Functional testing lends itself to both manual and automated testing, and is performed to test the functionality of each feature of the website.

Web-based testing activities include:

Test that all links in your webpages work correctly and make sure there are no broken links. The links to be checked include (a sample automated check follows the list):

 Outgoing links
 Internal links
 Anchor Links
 Mail To Links
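A minimal sketch of automating the broken-link check (assuming Python with the third-party requests and beautifulsoup4 packages installed; the URL is a placeholder):

import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def find_broken_links(page_url):
    # Fetch the page, extract its anchor links and report the broken ones.
    page = requests.get(page_url, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")
    broken = []
    for anchor in soup.find_all("a", href=True):
        link = urljoin(page_url, anchor["href"])  # resolves internal/relative links
        if link.startswith("mailto:"):
            continue                              # mail-to links need a different check
        try:
            status = requests.head(link, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            broken.append((link, status))
    return broken

print(find_broken_links("https://example.com"))  # placeholder URL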

Test that forms work as expected. This includes:

 Scripting checks on the form work as expected; for example, if a user does not fill in a mandatory field, an error message is shown.
 Check that default values are being populated.
 Once submitted, the data in the forms is submitted to a live database or is linked to a working email address.
 Forms are optimally formatted for better readability.

Test that cookies work as expected. Cookies are small files used by websites primarily to remember active user sessions, so you do not need to log in every time you visit a website. Cookie testing includes:

 Testing that cookies (sessions) are deleted either when the cache is cleared or when they reach their expiry.
 Deleting cookies (sessions) and testing that login credentials are asked for when you next visit the site.

Test HTML and CSS to ensure that search engines can crawl your site easily. This includes:

 Checking for syntax errors.
 Checking for readable color schemes.
 Standards compliance: ensure that standards such as those of the W3C, OASIS, IETF, ISO, ECMA or WS-I are followed.

Test business workflows. This includes:

 Testing your end-to-end workflows/business scenarios, which take the user through a series of webpages to complete.
 Testing negative scenarios as well, so that when a user executes an unexpected step, an appropriate error message or help is shown in your web application.

Usability testing
Usability testing has now become a vital part of any web-based project. It can be carried out by testers like you, or by a small focus group similar to the target audience of the web application.

Test the site Navigation:

 Menus, buttons or Links to different pages on your site should be easily visible and
consistent on all webpages.

Test the Content:

 Content should be legible with no spelling or grammatical errors.


 Images, if present, should contain "alt" text.

UI (User Interface) testing

User Interface (UI) testing is performed to verify that the graphical user interface of your website meets the specifications.

Here are some verifications for UI testing of a website:

 Compliance with the standards of graphical interfaces


 Design elements evaluation: layout, colors, fonts, font sizes, labels, text boxes, text
formatting, captions, buttons, lists, icons, links
 Testing with different screen resolutions
 Testing of localized versions: accuracy of translation (multilanguage,
multicurrency), checking the length of names of interface elements, etc.
 Testing the graphical user interface on target devices: smartphones and tablets.

Compatibility (Configuration) testing

Compatibility (Configuration) testing is performed to test your website with each one of the
supported software and hardware configurations:

 OS Configuration
 Browser Configuration
 Database Configuration

Cross-platform testing evaluates how your site works on different operating systems (both desktop and mobile): Windows, iOS/macOS, Linux, Android, BlackBerry, etc.

Cross-browser testing methods help verify that the site works correctly in different browser configurations: Mozilla Firefox, Google Chrome, Internet Explorer, Opera, etc.

Database Testing:
The database is a critical component of your web application and it must be tested thoroughly. Testing activities include (a small integrity-check sketch follows the list):

 Test whether any errors are shown while executing queries.
 Check that data integrity is maintained while creating, updating or deleting data in the database.
 Check the response time of queries and fine-tune them if necessary.
 Test that data retrieved from your database is shown accurately in your web application.
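A small sketch of an automated integrity check (assuming Python's built-in sqlite3 module; the users table and its values are illustrative):

import sqlite3

# An in-memory database stands in for the application's real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE NOT NULL)")

# Create a row, then verify it was stored accurately.
conn.execute("INSERT INTO users (email) VALUES (?)", ("user@example.com",))
row = conn.execute("SELECT email FROM users WHERE id = 1").fetchone()
assert row == ("user@example.com",)

# Integrity check: a duplicate email must be rejected by the UNIQUE constraint.
try:
    conn.execute("INSERT INTO users (email) VALUES (?)", ("user@example.com",))
    raise AssertionError("duplicate row accepted: integrity broken")
except sqlite3.IntegrityError:
    pass  # expected: the database preserved integrity

print("database integrity checks passed")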

Performance testing
Performance testing ensures your site works under all loads. Testing activities include, but are not limited to (a minimal timing sketch follows the list):

 Measuring website response times at different connection speeds.
 Load testing your web application to determine its behavior under normal and peak loads.
 Stress testing your website to determine its breaking point when pushed beyond normal loads at peak time.
 Testing how the site recovers if a crash occurs due to peak load.
 Making sure optimization techniques such as gzip compression and browser- and server-side caching are enabled to reduce load times.
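A rough timing sketch (assuming Python with the third-party requests package; the URL and the count of 20 simulated users are placeholders, and a real load test would normally use a dedicated tool such as JMeter or LoadRunner):

import time
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com"  # placeholder site under test

def timed_request(_):
    # Measure the response time of a single request.
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

# Simulate 20 concurrent users and report the slowest and average response times.
with ThreadPoolExecutor(max_workers=20) as pool:
    timings = list(pool.map(timed_request, range(20)))
print("max %.2fs, avg %.2fs" % (max(timings), sum(timings) / len(timings)))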

Security testing

Security testing is vital for e-commerce websites that store sensitive customer information such as credit cards. Testing activities include (a minimal sketch of the first check follows the list):

 Test that unauthorized access to secure pages is not permitted.
 Restricted files should not be downloadable without appropriate access.
 Check that sessions are automatically killed after prolonged user inactivity.
 On use of SSL certificates, the website should redirect to encrypted SSL pages.
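A hedged sketch of the first check (assuming Python with the requests package; the protected URL is a placeholder): a request carrying no credentials or session cookie must not be served the secure page, and should instead be denied or redirected to a login page:

import requests

SECURE_PAGE = "https://example.com/account"  # placeholder protected URL

# No credentials are sent, so access must not succeed with a plain 200 page.
response = requests.get(SECURE_PAGE, allow_redirects=False, timeout=10)
assert response.status_code in (301, 302, 401, 403), (
    "unauthorized request returned %d" % response.status_code)
print("unauthorized access is blocked or redirected")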

RESULT: This experiment introduces the concept of testing a web application.

Experiments beyond PTU Syllabus
Experiment No. 9

Title:- Creating modules in a structured chart.

Objective :- To get familiar with modules in structured charts

S/W Requirement :- Smart Draw

H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron

• Main Memory - 128 MB RAM

• Hard Disk – minimum 20 GB IDE Hard Disk

• Removable Drives–1.44 MB Floppy Disk Drive

–52X IDE CD-ROM Drive

•PS/2 HCL Keyboard and Mouse

METHOD:
1. Module Names and Numerical Identifiers
Each module must have a module name. Module names should consist of a transitive (or action)
verb and an object noun. Module names and numerical identifiers may be taken directly from
corresponding process names on Data Flow Diagrams or other process charts. The name of the
module and the numerical identifier is written inside the module rectangle. Other than the
module name and number, no other information is provided about the internals of the module.

2. Existing Module
Existing modules may be shown on a Structure Chart. An existing module is represented by
double vertical lines.

3. Unfactoring Symbol

An unfactoring symbol is a construct on a Structure Chart indicating that the module will not be a module on its own but will instead be lines of code in the parent module. An unfactoring symbol is represented by a flat rectangle on top of the module that will not become a separate module when the program is developed.

An unfactoring symbol reduces factoring without having to redraw the Structure Chart. Use an
unfactoring symbol when a module that is too small to exist on its own has been included on the
Structure Chart. The module may exist because factoring was taken too far or it may be shown
to make the chart easier to understand. (Factoring is the separation of a process contained as
code in one module into a new module of its own).

RESULT: This experiment introduces the concept of creating modules in structured charts.

Experiment No.10
Title:- Representing interrelationship in modules in structured chart.

Objective :- To get familiar with relationship of modules

S/W Requirement :- Smart Draw

H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron

• Main Memory - 128 MB RAM

• Hard Disk – minimum 20 GB IDE Hard Disk

• Removable Drives–1.44 MB Floppy Disk Drive

–52X IDE CD-ROM Drive

•PS/2 HCL Keyboard and Mouse

METHOD:

Invocation is a connecting line that shows the interrelationship between two modules. Modules are organized in levels, and the invocation lines connect modules between levels. Early forms of Structure Charts drew arrowheads at the end of a line between two modules to indicate that program control is passed from one module to the second in the direction of the arrow. Since the Structure Chart is hierarchical, arrowheads are not necessary: control is always passed from the higher-level module to the lower-level module. Also, eliminating arrowheads reduces clutter on the chart.

Control Rules Between Modules

The root level (i.e., top level) of the Structure Chart contains only one module. Control passes
level by level from the root to lower level modules. Control is always returned to the invoking
module. Control eventually returns to the root. There is a maximum of one control relationship
between any two modules. In other words, if Module A invokes Module B, Module B cannot
also invoke Module A.

If a module is called by more than one module and it is not an existing module, the module
which calls it the most is referred to as its parent. A module can only have one parent even
though it may be called by several modules.

Modules may communicate with other modules only by calling a module or through data or
control couples. Couples are information passed between two modules. A module may not
invoke a module that is higher in the hierarchy than the invoking module.

RESULT: This experiment introduces the concept of relationships among modules in structured charts.

Experiment No.11

Title: - Converting DFD into structured chart

Objective: - To get familiar with the DFD and structured charts

S/W Requirement: - Smart Draw

H/W Requirement :-
• Processor – Any suitable Processor e.g. Celeron

• Main Memory - 128 MB RAM

• Hard Disk – minimum 20 GB IDE Hard Disk

• Removable Drives–1.44 MB Floppy Disk Drive

–52X IDE CD-ROM Drive

•PS/2 HCL Keyboard and Mouse

ALGORITHM STEPS

 Break the system into suitably tractable units by means of transaction analysis
 Convert each unit into a good structure chart by means of transform analysis
 Link the separate units back into the overall system implementation

Transaction Analysis
The transactions are identified by studying the discrete event types that drive the system. For example, with respect to railway reservation, a customer may give the following transaction stimuli:

The three transaction types here are: Check Availability (an enquiry), Reserve Ticket (a booking) and Cancel Ticket (a cancellation). At any given time we may get customers interested in giving any of the above transaction stimuli. In a typical situation, any one stimulus may be entered through a particular terminal. The human user informs the system of her preference by selecting a transaction type from a menu. The first step in our strategy is to identify such transaction types and draw the first-level breakup of modules in the structure chart, creating a separate module to coordinate each transaction type. This is shown as follows:

Main(), which is the overall coordinating module, obtains the information about which transaction the user prefers to perform through TransChoice. The TransChoice is returned as a parameter to Main(). Remember, we are following our design principles faithfully in decomposing our modules. The actual details of GetTransactionType() are not relevant to Main(): it may, for example, refresh and print a text menu, prompt the user to select a choice and return this choice to Main(). No other component in our breakup will be affected even when this module is changed later to return the same input through a graphical interface instead of a textual menu. The modules Transaction1(), Transaction2() and Transaction3() are the coordinators of transactions one, two and three respectively. The details of these transactions are exploded in the next levels of abstraction.
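The coordinating structure described above might look like the following sketch (Python assumed; the module names Main, GetTransactionType and Transaction1-3 come from the text, while the menu-driven input is an assumption):

def get_transaction_type():
    # Could later be replaced by a graphical menu; main() is unaffected.
    print("1. Check Availability  2. Reserve Ticket  3. Cancel Ticket")
    return input("Select transaction: ").strip()

def transaction1():
    print("coordinating enquiry ...")        # Check Availability

def transaction2():
    print("coordinating booking ...")        # Reserve Ticket

def transaction3():
    print("coordinating cancellation ...")   # Cancel Ticket

def main():
    # Overall coordinating module: it only dispatches on TransChoice.
    trans_choice = get_transaction_type()
    dispatch = {"1": transaction1, "2": transaction2, "3": transaction3}
    dispatch.get(trans_choice, lambda: print("unknown choice"))()

if __name__ == "__main__":
    main()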
We will continue to identify more transaction centers by drawing a navigation chart of all input
screens that are needed to get various transaction stimuli from the user. These are to be factored
out in the next levels of the structure chart (in exactly the same way as seen before), for all
identified transaction centers.
Transform Analysis
Transform analysis is a strategy for converting each piece of the DFD (from level 2 or level 3, etc.) for all the identified transaction centers. If the given system has only one transaction (like a payroll system), then we can start the transformation from the level 1 DFD itself. Transform analysis is composed of the following five steps [Page-Jones, 1988]:

1. Draw a DFD of a transaction type (usually done during analysis phase)


2. Find the central functions of the DFD
3. Convert the DFD into a first-cut structure chart
4. Refine the structure chart
5. Verify that the final structure chart meets the requirements of the original DFD

Let us understand these steps through a payroll system example:

 Identifying the central transform

The central transform is the portion of the DFD that contains the essential functions of the system and is independent of the particular implementation of the input and output. One way of identifying the central transform (Page-Jones, 1988) is to identify the centre of the DFD by pruning off its afferent and efferent branches. The afferent stream is traced from outside the DFD to a flow point inside, just before the input is transformed into some form of output (for example, a format or validation process only refines the input; it does not transform it). Similarly, an efferent stream is a flow point from which output is formatted for better presentation. The processes between the afferent and efferent streams represent the central transform (marked within dotted lines above). In the above example, P1 is an input process, and P6 and P7 are output processes. The central transform processes are P2, P3, P4 and P5, which transform the given input into some form of output.

 First-cut Structure Chart

To produce the first-cut (first draft) structure chart, we first establish a boss module. A boss module can be one of the central transform processes; ideally, such a process is more of a coordinating process, encompassing the essence of the transformation. If we fail to find a boss module within the central transform, a dummy coordinating module is created.

In the above illustration we have a dummy boss module, "Produce Payroll", which is named in a way that indicates what the program is about. Having established the boss module, the afferent stream processes are moved to the leftmost side of the next level of the structure chart, the efferent stream processes to the rightmost side, and the central transform processes to the middle. Here we moved a module to get a valid timesheet (an afferent process) to the left side (indicated in yellow).
Two of the central transform processes are moved to the middle (indicated in orange). By grouping the other two central transform processes with their respective efferent processes, we created two modules (in blue), essentially to print results, on the right side.
The main advantage of a hierarchical (functional) arrangement of modules is that it leads to flexibility in the software. For instance, if the "Calculate Deduction" module must select deduction rates from multiple rates, the module can be split into two at the next level: one to get the selection and another to calculate. Even after this change, the "Calculate Deduction" module would return the same value.

 Refine the Structure Chart

Expand the structure chart further by using the different levels of the DFD. Factor down until you reach modules that correspond to processes accessing a source/sink or data stores. Once this is ready, other features of the software, such as error handling and security, have to be added. A module name should not be used for two different modules. If the same module is to be used in more than one place, it is demoted down so that "fan-in" can be done from the higher levels. Ideally, the name should sum up the activities done by the module and its subordinates.

 Verify the Structure Chart vis-à-vis the DFD

Because of the orientation towards the end product (the software), the finer details of how data originates and is stored (as they appear in the DFD) are not explicit in the Structure Chart. Hence the DFD may still be needed along with the Structure Chart to understand the data flow while creating the low-level design.

 Constructing Structure Chart (An illustration)

Let us consider an illustration of a structured chart. The following are the major processes in a bank:

P1: Saving Procedure


P2: Current Procedure

P3: Loan Procedure

P4: DD Procedure

P5: MT Procedure

Since all of these are major subsystems, each with its own major processing, we first perform transaction analysis on them.

Some characteristics of the structure chart as a whole give clues about the quality of the system. Page-Jones (1988) suggests the following guidelines for a good decomposition of a structure chart:

 Avoid decision splits; keep the span-of-effect within the scope-of-control, i.e. a module can affect only those modules that come under its control (all subordinates: the immediate ones, the modules reporting to them, and so on).
 An error should be reported from the module that both detects the error and knows what the error is.
 Restrict the fan-out (number of subordinates of a module) to seven. Increase fan-in (number of immediate bosses of a module); high fan-in, achieved in a functional way, improves reusability.

RESULT: This experiment introduces the concepts of DFDs and structured charts.

