SCHOOL MANAGEMENT SYSTEM
This project is a school management system. It can handle all details about a student, including college details, subject details, student personal details, academic details, exam details, etc. Apart from storing the data, it maintains the information by applying proper updates as records change.
At present the school management and all its procedures are totally manual, which creates many problems due to wrong entries and mistakes in totalling. This system avoids such mistakes through proper checks and validation controls in the handling of student records, fee deposit particulars, teachers' schedules, examination reports, and the issue of transfer certificates.
OBJECTIVE
The objective of developing such a computerized system is to reduce the paper work and save time in school management, thereby increasing efficiency and decreasing errors.
The project provides us the information about student records, school faculty, the school timetable, school fees, school examination results, and library management. The system must provide the flexibility of generating the required documents on screen as well as in printed form.
SSMS is a software application for schools, colleges, and institutes that manages student data: entering student test and other assignment scores through an electronic grade book, handling fee deposits, examination reports, the issue of transfer certificates, etc. Also known as a student records system (SRS) or student management system, it supports the work of school authorities. In the current system all these activities are done manually, which is very time-consuming and error-prone.
SCOPE
At present the scope of this application is wide, because people have less time to meet and talk in person; the application helps them communicate and share school information instead. It covers student records, school faculty, the school timetable, school fees, examination results, and library management.
SOFTWARE ENGINEERING PARADIGM
APPLIED
To solve actual problems, a software engineer or a team of engineers must adopt a development strategy that encompasses the process, methods, and tools, together with the generic phases of software engineering. The strategy is chosen based on the nature of the project and application, the methods and tools to be used, and the controls and deliverables that are required.
As the development process specifies the major development and quality-assurance activities that need to be performed in the project, the development process forms the core of the software process. The management process is decided based on the development process. Because of the importance of the development process, various models have been proposed, e.g. the waterfall model (linear sequential model), the spiral model, the prototyping model, the RAD model, etc. In our project we used the waterfall model for development.
WATERFALL MODEL
The waterfall model, sometimes called the classic life-cycle model or linear sequential model, suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and maintenance. Modelled after the conventional engineering cycle, the linear sequential model encompasses the following activities:
ANALYSIS
DESIGNING
Fig. 1: system diagram (labels: MONITOR, MOUSE)
FRONT END AND BACK END
FRONT END
HISTORY OF ASP.NET
After four years of development, and a series of beta releases in 2000 and 2001, ASP.NET 1.0 was released on January 5, 2002 as part of version 1.0 of the .NET Framework. Even prior to the release, dozens of books had been written about ASP.NET,[5] and Microsoft promoted it heavily as part of its platform for web services. Scott Guthrie became the product unit manager for ASP.NET, and development continued apace, with version 1.1 being released on April 24, 2003 as a part of Windows Server 2003.
INTRODUCTION TO ASP.NET
ASP.NET is a web application framework that allows programmers to build dynamic web sites, web applications, and web services. It was first released in January 2002 with version 1.0 of the .NET Framework, and is the successor to Microsoft's Active Server Pages (ASP) technology. ASP.NET is built on the Common Language Runtime (CLR), allowing programmers to write ASP.NET code using any supported .NET language. The ASP.NET SOAP extension framework allows ASP.NET components to process SOAP messages. It is a modern and modular web framework, used together with other frameworks such as Entity Framework. ASP.NET is not just a simple upgrade or the latest version of ASP: it combines unprecedented developer productivity with performance, reliability, and ease of deployment, and it redesigns the whole development process. It is still easy to grasp for newcomers, but it provides many new ways of managing projects.
ASP.NET is used to create web pages and web technologies and allows developers to build dynamic, rich web sites and web applications using compiled languages like VB.NET and C#. It is not limited to script languages; it allows you to make use of .NET languages like C#, J#, and VB.NET. It allows developers to build very compelling applications by making use of Visual Studio, the development tool provided by Microsoft, and of the common language runtime that can be used on any Windows server to host powerful ASP.NET applications. Its main features are described in the sections below.
In the early days of the Web, before the release of server-side technologies such as Internet Information Services (IIS), the contents of web pages were largely static. These web pages needed to be constantly, and manually, modified, so there was an urgent need to create web sites whose content could change dynamically. Microsoft's Active Server Pages (ASP) was brought to the market to meet this need. ASP executed on the server side, with its output sent to the user's web browser, thus allowing the server to generate dynamic web pages based on the actions of the user.
Many popular web sites use ASP.NET as their underlying framework. Its main advantages are:
1. ASP.NET drastically reduces the amount of code required to build large applications.
2. ASP.NET provides compilation, native optimization, and caching services right out of the box.
4. The ASP.NET framework is complemented by a rich toolbox and designer in Visual Studio; drag-and-drop server controls and automatic deployment are just a few of the features this powerful tool provides.
5. Simplicity: ASP.NET makes it easy to perform common tasks, from simple form submission and client authentication to deployment and site configuration.
6. The source code and HTML are together, therefore ASP.NET pages are easy to maintain and write. Also, the source code is executed on the server. This provides a lot of power and flexibility to web pages.
7. All the processes are closely monitored and managed by the ASP.NET runtime, so that if a process is dead, a new process can be created in its place, which helps keep the application constantly available to handle requests.
8. It is purely a server-side technology, so ASP.NET code executes on the server before it is sent to the browser.
9. Being language-independent, it allows you to choose the language that best applies to your application, or even to split a single application across many languages.
10. ASP.NET makes for easy deployment. There is no need to register components, because the configuration information is built in.
11. The web server continuously monitors the pages, components, and applications running on it. If it notices any memory leaks, infinite loops, or other illegal activities, it immediately destroys those activities and restarts itself.
12. ASP.NET works easily with ADO.NET using data-binding and page-formatting features. The result is an application that runs faster and can handle large volumes of users without performance problems.
ASP.NET is a development framework used to create enterprise-class web sites, web applications, and technologies.
Whether you are building a small business web site or a large corporate web application distributed across multiple networks, ASP.NET provides all you need for web-based information management. It is still easy to grasp for newcomers, but it provides many new ways of managing projects. Its main features are described below.
CHARACTERISTICS
ASP.NET web pages, known officially as Web Forms, are the main building blocks for application development in ASP.NET.[7] There are two basic methodologies for Web Forms: a web application format and a web site format.[8] Web applications need to be compiled before deployment, while web site structures allow the user to copy the files directly to the server without prior compilation. Web Forms are contained in files with an ".aspx" extension; these files typically contain static (X)HTML markup as well as component markup. The component markup can include server-side Web Controls and User Controls that have been defined in the framework or the web page. For example, a textbox component can be declared as <asp:TextBox runat="server" />, which will be rendered into an HTML input box. Additionally, dynamic code, which runs on the server, can be placed in a page within a block <% -- dynamic code -- %>, which is similar to other web development technologies such as PHP, JSP, and ASP. With ASP.NET 2.0, Microsoft introduced a new code-behind model which allows static text to remain on the .aspx page, while dynamic code goes into a separate "code-behind" file (.aspx.cs or .aspx.vb).
EASY PROGRAMMING MODEL
ASP.NET makes building real-world web applications dramatically easier. ASP.NET server controls enable an HTML-like style of declarative programming that lets you build great pages with far less code than with classic ASP. Displaying data, validating user input, and uploading files are all remarkably easy. Best of all, ASP.NET pages work in all browsers, including Netscape, Opera, AOL, and Internet Explorer.
ASP.NET lets you leverage your current programming-language skills. Unlike classic ASP, which supports only interpreted VBScript and JScript, ASP.NET supports more than 25 .NET languages (with built-in support for VB.NET, C#, and JScript.NET), giving us unprecedented flexibility in our choice of language.
We can harness the full power of ASP.NET using any text editor, even Notepad. But Visual Studio .NET adds the productivity of Visual Basic-style development to the Web. Now we can visually design ASP.NET Web Forms using familiar drag-drop-double-click techniques, and enjoy full-fledged code support including statement completion and colour-coding, as well as integrated support for debugging and deploying ASP.NET web applications. The Enterprise versions of Visual Studio .NET deliver life-cycle features to help organizations plan, analyse, design, build, test, and coordinate teams that develop ASP.NET web applications. These include UML class modelling, database modelling, testing tools (functional, performance, and scalability), and enterprise frameworks and templates, all available within the integrated Visual Studio environment.
RICH CLASS FRAMEWORK
Application features that used to be hard to implement, or that required a third-party component, can now be added in just a few lines of code using the .NET Framework. The .NET Framework offers over 4,500 classes that encapsulate rich functionality like XML, data access, file upload, regular expressions, image generation, performance monitoring and logging, transactions, message queuing, SMTP mail, and much more. With its improved performance and scalability, ASP.NET also lets us serve more users with the same hardware.
COMPILED EXECUTION
ASP.NET is much faster than classic ASP, while preserving the "just hit save" update model of ASP. No explicit compile step is required: ASP.NET automatically detects any changes, dynamically compiles the files if needed, and stores the compiled results for reuse on subsequent requests. Dynamic compilation ensures that the application is always up to date, and compiled execution makes it fast. Most applications migrated from classic ASP see a substantial performance increase.
OUTPUT CACHING
ASP.NET output caching can dramatically improve the performance and scalability of an application. When output caching is enabled on a page, ASP.NET executes the page just once, and saves the result in memory in addition to sending it to the user. When another user requests the same page, ASP.NET serves the cached result from memory without re-executing the page. Output caching is configurable, and can be used to cache individual regions or an entire page. It can dramatically improve the performance of data-driven pages by eliminating the need to query the database on every request.
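The caching behaviour described above can be sketched in a few lines of Python. This is an illustrative sketch, not ASP.NET's implementation: the render_page function, the URL-based cache key, and the duration handling are all simplifying assumptions.

```python
import time

_cache = {}  # url -> (expiry_time, rendered_html)

def render_page(url):
    # Stand-in for the expensive page execution (database queries, templating).
    return f"<html>content for {url}</html>"

def get_page(url, duration=60):
    """Serve a cached copy while it is still valid, as output caching does."""
    expiry, html = _cache.get(url, (0.0, None))
    if time.time() < expiry:
        return html                       # cached: the page is NOT re-executed
    html = render_page(url)               # first request: execute the page once
    _cache[url] = (time.time() + duration, html)
    return html

print(get_page("/students"))   # executes the page and caches the result
print(get_page("/students"))   # served from memory without re-execution
```

In ASP.NET itself the same effect is declared on the page rather than coded by hand.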
ENHANCED RELIABILITY
ASP.NET automatically detects and recovers from errors like deadlocks and memory leaks to ensure our application is always available to our users. For example, say that our application has a small memory leak, and that after a week the leak has tied up a significant percentage of our server's virtual memory. ASP.NET will detect this condition, automatically start up another copy of the ASP.NET worker process, and direct all new requests to the new process. Once the old process has finished processing its pending requests, it is gracefully disposed and the leaked memory is released.
EASY DEPLOYMENT
ASP.NET takes the pain out of deploying server applications. With "no-touch" application deployment, we can deploy an entire application as easily as an HTML page: just copy it to the server. There is no need to run regsvr32 to register any components, and configuration settings are stored in an XML file within the application.
DYNAMIC UPDATE OF RUNNING APPLICATION
ASP.NET now lets us update compiled components without restarting the web server. In the past, with classic COM components, the developer would have to restart the web server each time he deployed an update. With ASP.NET, we simply copy the new component over the existing DLL; ASP.NET will automatically detect the change and start using the new code.
ASP.NET pages are built out of server-side Web controls, similar to a Windows user interface. A Web control, such as a button or label, functions in very much the same way as its Windows counterpart: code can assign its properties and respond to its events. Controls know how to render themselves: whereas Windows controls draw themselves to the screen, Web controls produce segments of HTML and JavaScript which form parts of the resulting page sent to the end-user's browser.
ASP.NET Web Forms encourage the programmer to develop applications using an event-driven GUI model, rather than the conventional web-scripting model of environments like ASP and PHP. The framework combines existing web technologies with mechanisms like "view state" to bring persistent (inter-request) state to the inherently stateless web environment.
INTRODUCTION TO .NET
.NET Framework is a software framework developed by Microsoft that runs primarily on Microsoft Windows. It includes a large class library known as the Framework Class Library (FCL) and provides language interoperability (each language can use code written in other languages) across several programming languages. Programs written for .NET execute in a software environment known as the Common Language Runtime (CLR), an application virtual machine that provides services such as security, memory management, and exception handling. FCL and CLR together constitute the .NET Framework. The framework is intended to be used by most new applications created for the Windows platform, and Microsoft also produces an integrated development environment largely for .NET software called Visual Studio.
The .NET Framework started out as a proprietary framework, although the company worked to standardize the software stack almost immediately, even before its first release. Despite the standardization efforts, developers, particularly those in the free and open-source software communities, expressed unease with the selected terms and with the prospects of a free and open-source implementation, especially regarding software patents. Since then, Microsoft has changed .NET development to more closely follow a contemporary model of a community-developed software project, including issuing an update to its patent promise to address the concerns. The .NET Framework family also includes two versions for mobile or embedded device use: a reduced version of the framework, the .NET Compact Framework, is available on Windows CE platforms, and the .NET Micro Framework is targeted at severely resource-constrained devices.
C# LANGUAGE
C# was developed by Anders Hejlsberg and his team during the development of the .NET Framework. C# is designed for the Common Language Infrastructure (CLI), which consists of executable code and a runtime environment that allows the use of various high-level languages on different computer platforms and architectures. The following reasons make C# a widely used professional language: it is a modern, general-purpose language; it is object-oriented and component-oriented; it is easy to learn; it is a structured language; it produces efficient programs; it can be compiled on a variety of computer platforms; and it is a part of the .NET Framework.
BACK END
SQL Server 2005
This section gives a brief introduction to Structured Query Language (SQL) and how to work with a database: database creation and deletion, fetching rows, modifying rows, etc. SQL is an ANSI (American National Standards Institute) standard, but there are many different versions of the language.
What is SQL?
SQL is a language for storing, manipulating, and retrieving data stored in a relational database, and it is the standard language for relational database systems. All relational database management systems, such as MySQL, MS Access, Oracle, Sybase, Informix, Postgres, and SQL Server, use SQL as their standard database language, although each also supports its own dialect. Among other things, SQL allows users to define the data in a database and manipulate that data, and allows embedding within other languages using SQL modules, libraries, and pre-compilers.
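The basic operations listed above (creating a database and table, fetching rows, and modifying rows) can be tried in any SQL dialect. Here is a minimal sketch using Python's built-in sqlite3 module with a hypothetical student table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # database creation (in-memory for the demo)
conn.execute("CREATE TABLE student (roll INTEGER PRIMARY KEY, name TEXT, fee INTEGER)")

# Inserting rows
conn.execute("INSERT INTO student VALUES (1, 'Ravi', 500)")
conn.execute("INSERT INTO student VALUES (2, 'Meena', 450)")

# Fetching rows
rows = conn.execute("SELECT name FROM student ORDER BY roll").fetchall()
print([r[0] for r in rows])          # ['Ravi', 'Meena']

# Modifying and deleting rows
conn.execute("UPDATE student SET fee = 550 WHERE roll = 1")
conn.execute("DELETE FROM student WHERE roll = 2")
print(conn.execute("SELECT COUNT(*), SUM(fee) FROM student").fetchone())  # (1, 550)
```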
SQL Server 2005 (formerly codenamed "Yukon") was released in October 2005. It included native support for managing XML data, in addition to relational data. For this purpose, it defined an xml data type that could be used either as a data type in database columns or as literals in queries. XML columns can be associated with XSD schemas; XML data being stored is verified against the schema. XML is converted to an internal binary data type before being stored in the database. Specialized indexing methods were made available for XML data. XML data is queried using XQuery; SQL Server 2005 added some extensions to the T-SQL language to allow embedding XQuery queries in T-SQL. In addition, it also defined a new extension to XQuery, called XML DML, that allows query-based modifications to XML data. SQL Server 2005 also allows a database server to be exposed over web services using Tabular Data Stream (TDS) packets encapsulated within SOAP requests; when the data is accessed over web services, results are returned as XML.
Common Language Runtime (CLR) integration was introduced with this version, enabling one to write SQL code as managed code run by the CLR. For relational data, T-SQL was augmented with error-handling features (TRY/CATCH) and support for recursive queries with CTEs (Common Table Expressions). SQL Server 2005 was also enhanced with new indexing algorithms, syntax, and better error-recovery systems. Data pages are checksummed for better error resiliency, and optimistic concurrency support was added for better performance. Permissions and access control were made more granular, and the query processor handles concurrent execution of queries more efficiently. Partitions on tables and indexes are supported natively, so scaling out a database onto a cluster is easier. SQL CLR was introduced with SQL Server 2005 to let it integrate with the .NET Framework.
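A recursive query with a CTE, mentioned above for T-SQL, can be illustrated in standard SQL. This sketch uses SQLite's WITH RECURSIVE syntax via Python rather than SQL Server itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A recursive CTE generating the sequence 1..5, the same shape as a
# T-SQL recursive common table expression.
rows = conn.execute("""
    WITH RECURSIVE seq(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM seq WHERE n < 5
    )
    SELECT n FROM seq""").fetchall()
print([r[0] for r in rows])  # [1, 2, 3, 4, 5]
```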
SQL Server 2005 introduced multi-version concurrency control. User-facing features include a new transaction isolation level called SNAPSHOT and a variation of the READ COMMITTED isolation level based on row versioning.
SQL Server 2005 introduced "MARS" (Multiple Active Result Sets), a method of allowing the usage of a single database connection for multiple purposes.
SQL Server 2005 introduced DMVs (Dynamic Management Views), which are specialized views and functions that return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance.
Service Pack 1 (SP1) of SQL Server 2005 introduced Database Mirroring, a high-availability option that provides redundancy and failover capabilities at the database level.[11] Failover can be performed manually or can be configured for automatic failover. Automatic failover requires a witness partner and a synchronous operating mode (also known as high-safety or full safety).[12] Database Mirroring was included in the first release of SQL Server 2005 for evaluation purposes only; prior to SP1, it was not enabled by default and was not supported by Microsoft.
A database management system, or DBMS, gives the user access to their data and helps them transform the data into information. Such database management systems include dBase, Paradox, IMS, and SQL Server. These systems allow users to create, update, and extract information from their databases.
Data refers to the characteristics of people, things, and events. SQL Server stores each data item in its own field. The fields relating to a particular person, thing, or event are bundled together to form a single complete unit of data, called a record (also known as a row). Each record is made up of a number of fields, and no two fields in a record can have the same field name.
SQL Server 2005 helps organizations control data-management costs and maximize capital and operating budgets, providing an enterprise-ready data environment. With low implementation and maintenance costs, SQL Server 2005 delivers a rapid return on the data-management investment, and it supports the rapid development of enterprise-class business applications that can give a company a critical competitive advantage. Benchmarked for scalability, speed, and performance, SQL Server 2005 is a fully enterprise-class database product.
ARCHITECTURE
The protocol layer implements the external interface to SQL Server. All operations that can be invoked on SQL Server are communicated to it via a Microsoft-defined format called Tabular Data Stream (TDS). TDS is an application-layer protocol used to transfer data between a database server and a client. Initially designed and developed by Sybase Inc. for their Sybase SQL Server relational database engine in 1984, and later used by Microsoft in Microsoft SQL Server, TDS packets can be encased in other physical-transport-dependent protocols, including TCP/IP, named pipes, and shared memory. Consequently, access to SQL Server is available over these protocols. In addition, the SQL Server API can also be exposed over web services.
DATA STORAGE
The main unit of data storage is a database, which is a collection of tables with typed columns. SQL Server supports different data types, including primitive types such as Integer, Float, Decimal, Char (fixed-length character strings), Varchar (variable-length character strings), Binary (for unstructured blobs of data), and Text (for textual data), among others. Microsoft SQL Server also allows user-defined composite types (UDTs) to be defined and used. It also makes server statistics available as virtual tables and views. In addition to tables, a database can contain other objects including views, stored procedures, indexes, and constraints, along with a transaction log. A SQL Server database can contain a maximum of 2^31 objects, and can span multiple OS-level files with a maximum file size of 2^60 bytes (1 exabyte).
The data in the database is stored in primary data files with an .mdf extension. Secondary data files, identified with an .ndf extension, are used to allow the data of a single database to be spread across more than one file, and optionally across more than one file system.
Storage space allocated to a database is divided into sequentially numbered pages, each 8 KB in size. A page is the basic unit of I/O for SQL Server operations. A page is marked with a 96-byte header which stores metadata about the page, including the page number, page type, free space on the page, and the ID of the object that owns it. The page type defines the data contained in the page: data stored in the database, an index, an allocation map (which holds information about how pages are allocated to tables and indexes), a change map (which holds information about the changes made to other pages since the last backup or logging), or large data types such as image or text. While a page is the basic unit of an I/O operation, space is actually managed in terms of an extent, which consists of 8 contiguous pages. A database object can either span all 8 pages in an extent ("uniform extent") or share an extent with up to 7 more objects ("mixed extent"). A row in a database table cannot span more than one page, so it is limited to 8 KB in size. However, if the data exceeds 8 KB and the row contains Varchar or Varbinary data, the data in those columns is moved to a new page (or possibly a sequence of pages, called an allocation unit) and replaced with a pointer to the data.
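The page and extent figures above imply some simple storage arithmetic, sketched here (8 KB pages, 96-byte headers, and 8-page extents as stated; the 200-byte row size is an arbitrary example):

```python
PAGE_SIZE = 8 * 1024       # an 8 KB page
HEADER = 96                # the 96-byte page header
EXTENT_PAGES = 8           # an extent is 8 contiguous pages

usable_per_page = PAGE_SIZE - HEADER
extent_bytes = EXTENT_PAGES * PAGE_SIZE
print(usable_per_page)         # 8096 bytes left for rows on each page
print(extent_bytes // 1024)    # 64 KB per extent

# A row cannot span a page, so a 200-byte row fits this many times per page:
print(usable_per_page // 200)  # 40
```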
For physical storage of a table, its rows are divided into a series of partitions (numbered 1 to n). The partition size is user-defined; by default all rows are in a single partition. A table is split into multiple partitions in order to spread a database over a computer cluster. Rows in each partition are stored in either a B-tree or a heap structure. If the table has an associated clustered index to allow fast retrieval of rows, the rows are stored in order according to their index values, with a B-tree providing the index. The data is in the leaf nodes of the tree, with the other nodes storing the index values for the leaf data reachable from the respective nodes. If the index is non-clustered, the rows are not sorted according to the index keys. An indexed view has the same storage structure as an indexed table. A table without a clustered index is stored in an unordered heap structure. However, the table may have non-clustered indices to allow fast retrieval of rows. In some situations the heap structure has performance advantages over the clustered structure. Both heaps and B-trees can span multiple allocation units.
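The difference between a clustered (index-ordered) table and a heap can be sketched with plain Python lists. This is a conceptual illustration, not SQL Server's storage engine: binary search over sorted rows stands in for a B-tree lookup, and a linear scan stands in for a heap scan.

```python
import bisect

# "Clustered": rows kept sorted by key, so a lookup can binary-search.
clustered = [(roll, f"student-{roll}") for roll in range(0, 1000, 2)]

def clustered_lookup(key):
    i = bisect.bisect_left(clustered, (key,))
    if i < len(clustered) and clustered[i][0] == key:
        return clustered[i][1]
    return None

# "Heap": the same rows in arbitrary order, so a lookup must scan every row.
heap = list(reversed(clustered))

def heap_lookup(key):
    for roll, name in heap:
        if roll == key:
            return name
    return None

print(clustered_lookup(500))  # 'student-500', found via binary search
print(heap_lookup(500))       # same answer, but only after a linear scan
```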
USER-DEFINED FUNCTIONS
SQL Server has always provided the ability to store and execute SQL code routines via stored procedures. In addition, SQL Server has always supplied a number of built-in functions, which can be used almost anywhere an expression can be specified in a query. This was one of the shortcomings of stored procedures: they couldn't be used inline in queries, in select lists, WHERE clauses, and so on. Perhaps we want to write a routine to calculate the last business day of the month. With a stored procedure, we have to exec the procedure, passing in the current month as a parameter and returning the value into an output variable, and then use the variable in our queries. User-defined functions remove this limitation: we can write our own function and use it directly in the query, just like a system function.
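The last-business-day example above can be demonstrated with a user-defined function that is usable inline in a query. This sketch uses Python's sqlite3 create_function facility rather than T-SQL, and the fee_due table is hypothetical:

```python
import sqlite3
import calendar
from datetime import date, timedelta

def last_business_day(year, month):
    # Start at the last calendar day of the month and walk back to a weekday.
    d = date(year, month, calendar.monthrange(year, month)[1])
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d -= timedelta(days=1)
    return d.isoformat()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fee_due (student TEXT, year INT, month INT)")
conn.execute("INSERT INTO fee_due VALUES ('Asha', 2005, 10)")

# Register the routine as a scalar SQL function, so it can be used
# directly in a SELECT list -- the ability the passage describes.
conn.create_function("last_business_day", 2, last_business_day)
row = conn.execute(
    "SELECT student, last_business_day(year, month) FROM fee_due").fetchone()
print(row)  # ('Asha', '2005-10-31')
```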
INDEXED VIEWS
Views are often used to simplify complex queries, and they can contain joins and aggregate functions. However, in the past, queries against views were resolved to queries against the underlying base tables, and any aggregates were recalculated each time we ran a query against the view. In SQL Server 2005 Enterprise or Developer Edition, we can define indexes on views to improve query performance against the view. When creating an index on a view, the result set of the view is stored and indexed in the database.
SQL Server 7.0 provided the ability to create partitioned views using the UNION ALL statement in a view definition. It was limited, however, in that all the tables had to reside within the same SQL Server where the view was defined. SQL Server 2005 expands this with distributed partitioned views, which let you partition tables across multiple SQL Servers. The feature helps to scale out one database server to multiple database servers, while making the data appear as if it comes from a single table.
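A partitioned view built with UNION ALL, as described above, can be sketched in any SQL database. This example uses SQLite via Python, with hypothetical per-year student tables standing in for the partitions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Two "partitions" of a student table, split here by admission year.
conn.execute("CREATE TABLE student_2004 (roll INT, name TEXT)")
conn.execute("CREATE TABLE student_2005 (roll INT, name TEXT)")
conn.execute("INSERT INTO student_2004 VALUES (1, 'Ravi')")
conn.execute("INSERT INTO student_2005 VALUES (2, 'Meena')")

# The partitioned view: UNION ALL makes the partitions look like one table.
conn.execute("""CREATE VIEW student AS
                SELECT * FROM student_2004
                UNION ALL
                SELECT * FROM student_2005""")
print(conn.execute("SELECT COUNT(*) FROM student").fetchone()[0])  # 2
```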
SQL Server 2005 supports three notable data types. Two of these can be used as data types for local variables, stored procedure parameters and return values, user-defined function parameters and return values, or table columns:
bigint: an 8-byte integer that can store values from -2^63 through 2^63 - 1.
sql_variant: a variable-sized column that can store values of various SQL Server-supported data types, with the exception of text, ntext, and timestamp.
The third, the table data type, can be used only as a local variable data type within functions, stored procedures, and SQL batches; it cannot be used as a column data type. A variable defined with the table data type can be used to store a result set for later processing, and a table variable can be used in queries anywhere a table can be specified.
In previous versions of SQL Server, text and image data was always stored on a separate page chain from where the actual data row resided. The data row contained only a pointer to the text or image page chain, regardless of the size of the text or image data. SQL Server 2005 provides a new text-in-row table option that allows small text and image data values to be placed directly in the data row, instead of requiring a separate data page. This can reduce the amount of space required to store small text and image data values, as well as reduce the amount of I/O required to retrieve rows containing them.
CASCADING RI CONSTRAINTS
In previous versions of SQL Server, an update or delete of a row referenced by a foreign key was simply aborted with an error message. SQL Server 2005 provides the ability to specify the action to take when a column referenced by a foreign key constraint is updated or deleted. We can still abort the update or delete if related foreign key records exist by specifying the NO ACTION option, or we can specify the new CASCADE option, which will cascade the update or delete operation to the related foreign key records.
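The CASCADE option can be demonstrated in any SQL database that supports referential actions. This sketch uses SQLite via Python (where foreign-key enforcement must be switched on explicitly), with hypothetical student and fee tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires enabling enforcement
conn.execute("CREATE TABLE student (roll INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE fee_receipt (
                  id INTEGER PRIMARY KEY,
                  roll INTEGER REFERENCES student(roll) ON DELETE CASCADE,
                  amount INTEGER)""")
conn.execute("INSERT INTO student VALUES (1, 'Ravi')")
conn.execute("INSERT INTO fee_receipt VALUES (10, 1, 500)")

# Deleting the parent row cascades the delete to the referencing receipts.
conn.execute("DELETE FROM student WHERE roll = 1")
print(conn.execute("SELECT COUNT(*) FROM fee_receipt").fetchone()[0])  # 0
```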
Previous versions of SQL Server supported running only a single instance at a time on a machine; running multiple versions of SQL Server required switching back and forth between the different installations. SQL Server 2005 provides support for running multiple instances of SQL Server on the same system. This allows us to simultaneously run one instance of SQL Server 6.5 or 7.0 along with one or more instances of SQL Server 2005. Each SQL Server instance runs independently of the others and has its own set of system and user databases, security configuration, and so on. Applications can connect to the different instances in the same way they connect to different SQL Servers on different machines.
XML SUPPORT
XML is a markup language used to describe the contents of a set of data and how the data should be output or displayed on a web page. XML, like HTML, is derived from the Standard Generalized Markup Language (SGML). SQL Server 2005 can return query results directly as XML, eliminating the conversion that otherwise needs to take place from the result set returned from SQL Server to a format that a web application can use.
LOG SHIPPING
The Enterprise Edition of SQL Server 2005 supports log shipping, which we can use to copy and load transaction-log backups from one database to one or more other databases on a constant basis. This allows us to have a primary read/write database with one or more read-only copies of the database that are kept synchronized by restoring the logs from the primary database. The destination database can be used as a warm standby for the primary database, to which we can switch users over in the event of a primary database failure.
Analysis and Design
SOFTWARE REQUIREMENT
SPECIFICATION (SRS)
A software requirements specification is a communication medium between the customer and the supplier. When the software requirements specification is completed and accepted by all parties, the end of the requirements-engineering phase has been reached. This is not to say that, after the acceptance phase, the requirements cannot be changed; rather, any changes must be tightly controlled. The software requirements specification should be developed jointly by the customer and the supplier, as initially neither alone has both the knowledge of what is required and the knowledge of what is feasible.
WHY IS A SOFTWARE REQUIREMENTS SPECIFICATION REQUIRED?
The author of the document has to write the document in such a way that it is general enough to cover all foreseeable uses of the system, while at the same time containing any constraints which must be applied.
WHAT IS CONTAINED IN A SOFTWARE REQUIREMENTS
SPECIFICATION?
A software requirement specification in its most basic form is a formal document used in communicating the software requirements between the customer and the developer. With this in mind, the minimum amount of information that the software requirement specification should contain is agreed by both parties. The types of requirements are defined, as are the characteristics the requirements should have in order to fully satisfy the user. However, the requirements alone give only a narrow view of the system, so more information is required to place the system into a context which defines the purpose of the system, an overview of the system's functions and the type of user that the system will have. This additional information will aid the developer in creating a software system which will be aimed at the user's ability and the client's function.
Whilst requirements are being collated and analyzed, they are segregated into type
categories.
1. Functional requirements
2. Performance requirements
3. Interface requirements
4. Operational requirements
5. Resource requirements
6. Verification requirements
7. Acceptance-testing requirements
8. Documentation requirements
9. Quality requirements
FUNCTIONAL REQUIREMENTS
Functional or behavioral requirements are a sub-set of the overall system requirements. These requirements are used to consider trade-offs between system behavior, redundancy and the benefits of each. Behavioral requirements, as well as describing how the system will operate under normal operation, should also consider the consequences of, and the response to, abnormal conditions.
PERFORMANCE REQUIREMENTS
All performance requirements must have a value which is measurable and quantitative: values such as rate, frequency, speed and level. The values specified must also be in some recognized unit, for example meters, square centimeters, bar, kilometers per hour, etc. The performance values are based on values extracted from the system requirements.
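A measurable requirement can be checked directly in code. In the sketch below, the 200-millisecond limit and the receipt function are invented purely to illustrate how a quantitative performance value is verified:

```python
import time

# Hypothetical performance requirement: fee-receipt generation shall
# complete within 200 milliseconds (a measurable, quantitative value).
REQUIRED_MAX_SECONDS = 0.200

def generate_receipt(student, amount):
    # Stand-in for the real report-generation step.
    return f"Receipt: {student} paid Rs. {amount}"

start = time.perf_counter()
receipt = generate_receipt("Roll 5, Class VII", 1500)
elapsed = time.perf_counter() - start

print(receipt)
print(f"elapsed = {elapsed * 1000:.2f} ms, "
      f"requirement met: {elapsed <= REQUIRED_MAX_SECONDS}")
```

Because the requirement is stated as a number in a recognized unit, the check is a simple comparison rather than a matter of opinion.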
INTERFACE REQUIREMENTS
Interface requirements, at this stage are handled separately, with hardware requirements
being derived separately from the software requirements. Software interfaces include
dealing with an existing software system, or any interface standard that has been
requested. Hardware requirements, unlike software requirements, give room for trade-offs if they are not fully defined; however, all assumptions should be defined and carefully documented.
OPERATIONAL REQUIREMENTS
Operational requirements give an "in the field" view to the specification, detailing such things as: how the system will operate, what the operator syntax is, how the system will communicate with the operators, how many operators are required and their qualifications, the environment in which the operators will use the system, any error messages and how they are displayed, and what the screen layout looks like.
RESOURCE REQUIREMENTS
Resource requirements divulge the design constraints relating to the utilization of the system hardware. Restrictions may be placed on the software, for example on using only specific, certified components, or on the use of the available memory and the amount of memory available.
VERIFICATION REQUIREMENTS
Verification requirements take into account how customer acceptance will be conducted at the completion of the project. Here a reference should be made to the verification plan document.
Verification requirements specify how the functional and the performance requirements are to be measured and verified. The measurements taken may include simulation, emulation and live tests with real or simulated inputs. The requirements should also state how the results of these measurements are to be assessed.
Acceptance test requirements detail the types of tests which are to be performed prior to acceptance of the system by the customer.
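Such acceptance tests can be automated. The sketch below uses Python's unittest module on an invented fee-totalling requirement; it is an illustration of the approach, not part of the project's actual test plan:

```python
import unittest

# Hypothetical functional requirement for the school management system:
# the total fee must equal the sum of its instalments.
def total_fees(instalments):
    return sum(instalments)

class FeeAcceptanceTest(unittest.TestCase):
    def test_total_matches_instalments(self):
        self.assertEqual(total_fees([500, 500, 250]), 1250)

    def test_empty_instalments(self):
        self.assertEqual(total_fees([]), 0)

# Run the acceptance suite and report whether it passed.
suite = unittest.TestLoader().loadTestsFromTestCase(FeeAcceptanceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("acceptance passed:", result.wasSuccessful())
```

Each test case traces back to one stated requirement, so a failing test points directly at the requirement that is not yet met.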
DOCUMENTATION REQUIREMENTS
Documentation requirements specify the documentation which is to be delivered to the client, either throughout or at the end of the project. The documentation supplied to the client may include project-specific documentation as well as user guides and any other relevant documentation.
QUALITY REQUIREMENTS
Quality requirements will specify any international as well as local standards which should be adhered to. The quality requirements should be addressed in the quality assurance plan, which is a core part of the quality assurance document. A typical quality requirements section contains subsections detailing the relevant quality criteria and how they will be met. These sections are:
Quality Factors
Correctness
Reliability
Efficiency
Integrity
Usability
Maintainability
Testability
Flexibility
Portability
Reusability
Interoperability
Additional Factors
Some of these factors can be addressed directly by requirements; for example, reliability can be stated as an average period of operation before failure. However, most of the factors detailed above are subjective and may only be realized during operation or post-delivery maintenance. For example, the system may be vigorously tested, but it is not always possible to test all permutations of possible inputs and operating conditions. For this reason errors may be found in the delivered system. With correctness, the subjectiveness of how correct the system is remains open to interpretation and needs to be put into context with the overall system and its intended usage. An example of this can be taken from the widely publicized floating-point rounding error found in Pentium processors. On the whole, most users of the processor will not be interested in values of that order, so as far as they are concerned the processor meets their correctness quality requirement; but for users who depend on high-precision calculations, this level of error may mean that the processor does not have the required quality of correctness.
SAFETY REQUIREMENTS
Safety requirements cover not only human safety, but also equipment and data safety. Human safety considerations include protecting the operator from moving parts, electrical circuitry and other physical dangers, and there may be special operating procedures required for safe use. Equipment safety includes safeguarding the software system from unauthorized access, and may require, for example, that any monitor used in the system conforms to certain screen emission standards.
RELIABILITY REQUIREMENTS
Reliability requirements are those which the software must meet in order to perform a
specificfunction under certain stated conditions, for a given period of time. The level of
reliability requirement can be dependent on the type of system, i.e. the more critical or
life threatening the system, the higher the level of reliability required. Reliability can be
measured in a number of ways including number of bugs per x lines of code, mean time
to failure as a percentage of the time the system will be operational before crashing or an
error occurring. Davis states however that the mean time to failure and percent reliability
should not be an issue as if the software is fully tested, the error will either show itself
during the initial period of use, if the system is asked to perform a function it was not
changed [Davis ’90]. Davis suggests the following hierarchy when considering the detail
numbers of human beings Kill a few people Injure people Cause major financial loss
Cause major embarrassment Cause minor financial loss Cause mild Inconvenience.
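The two measures mentioned above, mean time to failure and the percentage of time the system is operational, can be computed from a simple operating log. The figures below are invented for illustration:

```python
# Hypothetical operating log: hours of uptime ended by each crash,
# followed by the repair time after that crash.
uptimes_hours = [120.0, 95.5, 140.0]  # operating time before each failure
repair_hours = [1.5, 2.0, 1.0]        # downtime after each failure

# Mean time to failure: average uptime between failures.
mttf = sum(uptimes_hours) / len(uptimes_hours)

# Percent of total elapsed time the system was operational.
total = sum(uptimes_hours) + sum(repair_hours)
availability = 100.0 * sum(uptimes_hours) / total

print(f"MTTF = {mttf:.2f} hours")
print(f"Operational {availability:.2f}% of the time")
```

Stating the reliability requirement as a target for these numbers makes it verifiable in the same way as a performance requirement.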
MAINTAINABILITY REQUIREMENTS
Maintainability requirements look at the long-term life of the proposed system. Requirements should take into consideration any expected changes in the software system, and any changes in the environment in which the software operates.
The use case model of the system is described in terms of:
1. Actors
2. Case Diagram
ACTORS
Admin
Teacher
Student
1. ADMIN
Admin means the administrator of the project, who handles the overall project functionality. He has full authorization to add or delete any user. He performs as the lead actor of the application.
WORK OF ADMIN
1. First, he must login into the application or sign up into the app.
2. TEACHER
The teacher is also a main actor in this project, because he can teach subjects and courses.
WORK OF TEACHER
1. Like the admin, the teacher must also login or sign up into the app.
3. STUDENT
WORK OF STUDENT
1. Like the other actors, the student must login or sign up into the app.
2. Edit profile.
CASE DIAGRAM
A case diagram is a graphical depiction of the interaction among the elements of a service-oriented task.
ADMIN CASE DIAGRAM
[Figure: Admin case diagram. The admin logs in; a correct login takes him into the system, an incorrect one leaves him out of the system. Once logged in, he can add, update or delete teacher details, student details, course & fees and notices.]
This is the admin case diagram. In this diagram the admin first logs into the application, or signs up, and then he can configure users, manage users, update the database, create users and add new features. It tells us how a process relates to an actor. To perform any action we must insert data into the application, so that control can flow easily to manage the functionality of the application. This makes the work of the administrator very easy to understand. It is called the case diagram of the admin because it presents the admin's process control in diagrammatic form.
TEACHER CASE DIAGRAM
[Figure: Teacher case diagram. The teacher logs in (correct or incorrect) and can approve, teach and update assignments, learning material and attendance.]
STUDENT CASE DIAGRAM
[Figure: Student case diagram. The student logs in (correct or incorrect) and can check assignments, attendance and comments.]
USE CASE DIAGRAM
A use case diagram is a graphical depiction of the interaction among the elements of a system. A use case is a methodology used in system analysis to identify, clarify and organize system requirements. In this context, the term "system" refers to something being developed or operated, such as a mail-order product sales and service web site. Use case diagrams are employed throughout the design, testing and debugging of a software product under development.
USE CASE DIAGRAM
[Figure: Combined use case diagram. The teacher and student log in and use assignments, learning, attendance, comments and notices; the admin manages student details, teacher details and course & fees; all actors can log out.]
DATA DICTIONARY
A data dictionary is a tool for recording and processing information (metadata) about the data that an organization uses. It is a central catalogue for metadata. It can be integrated within the DBMS or kept separate, and may also be used as a repository for common code (e.g. library routines).
BENEFITS OF A DD
Benefits of a DDS are mainly due to the fact that it is a central store of information about the data. Among these benefits is simpler programming.
DD FACILITIES
2. To record and analyse data requirements independently of how they are going to be met: conceptual data models (entities, attributes, and relationships).
3. To record design decisions in terms of the database or file structures implemented and the programs that access them.
4. One of the main functions of a DDS is to show the relationship between the conceptual and the implementation-level views of the data.
DD INFORMATION
3. Details of ownership.
5. Details of the systems and programs which refer to or update the element.
6. Details on any privacy constraints that should be associated with the item.
7. Details about the data element in data processing systems, such as the length of the data item in characters, whether it is numeric, alphabetic or another data type, and which records or files contain it.
10. The validation rules for each element (e.g. acceptable values).
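A minimal sketch of one such data-dictionary entry, holding the kinds of metadata listed above; the element name, owner and rules are invented for illustration:

```python
# One data-dictionary entry, with the metadata fields described above:
# ownership, the systems that use the element, its length, its type,
# and a validation rule giving the acceptable values.
data_dictionary = {
    "roll_no": {
        "owner": "Registrar",                     # details of ownership
        "used_by": ["admission", "exam_report"],  # systems referring to it
        "length": 4,                              # length in characters
        "type": "numeric",                        # numeric, alphabetic, ...
        "validation": lambda v: v.isdigit() and 1 <= int(v) <= 9999,
    },
}

def validate(element, value):
    """Check a value against the rules recorded for an element."""
    entry = data_dictionary[element]
    return len(value) <= entry["length"] and entry["validation"](value)

print(validate("roll_no", "0042"))  # an acceptable roll number
print(validate("roll_no", "ABCD"))  # rejected: not numeric
```

Because every program consults the same entry, the length and validation rules are applied consistently across the systems that use the element.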
DD MANAGEMENT
1. With so much detail held in the DD, it is essential that an indexing and cross-referencing facility is provided by the DDS.
2. The DDS can produce reports for use by the data administration staff (to investigate the efficiency of use and storage of data), systems analysts, programmers, and users.
3. The DD can provide a pre-printed form to aid data input into the database and DD.
4. A query language is provided for ad-hoc queries. If the DD is tied to the DBMS, then the query language of the DBMS may be used.
MANAGEMENT OBJECTIVES
1. Provide an overall view of the data used within a computer project.
2. Provide details of applications and their data usage once a system has been implemented, so that the impact of any proposed changes can be assessed.
ADVANCED FACILITIES
1. Automatic input from source code of data definitions (at compile time).
2. The recognition that several versions of the same programs or data structures may exist at the same time: live and test states of the programs or data; programs and data structures which may be used at different sites; and data set up under different software or validation routines.
MANAGEMENT ADVANTAGES
2. Allows accurate assessment of the cost and time scale needed to effect any changes.
3. Reduces the clerical load of database administration, and gives more control over the use of data.
5. Aids the recording, processing, storage and destruction of data and associated documents.
MANAGEMENT DISADVANTAGES
3. It needs careful planning: defining the exact requirements, designing its contents, and testing its implementation.
4. The cost of a DD includes not only the initial price of its installation and any hardware requirements, but also the cost of collecting the information, entering it into the DD, and keeping it up to date.
In our project there are four data tables: library record, marks record, attendance record, and subject record. These tables tell us the attributes of each record.
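Since the record tables themselves appear only as figures, the sketch below shows how the four tables might be declared. Every column name here is an illustrative assumption, not the project's actual schema:

```python
import sqlite3

# Hypothetical schemas for the four record tables named above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE subject_record (
    subject_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    semester   INTEGER
);
CREATE TABLE marks_record (
    roll_no    INTEGER,
    subject_id INTEGER REFERENCES subject_record(subject_id),
    marks      INTEGER CHECK (marks BETWEEN 0 AND 100),
    PRIMARY KEY (roll_no, subject_id)
);
CREATE TABLE attendance_record (
    roll_no INTEGER,
    day     TEXT,
    present INTEGER CHECK (present IN (0, 1)),
    PRIMARY KEY (roll_no, day)
);
CREATE TABLE library_record (
    book_id   INTEGER PRIMARY KEY,
    title     TEXT NOT NULL,
    issued_to INTEGER
);
""")

# List the tables actually created.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```

The CHECK constraints play the role of the validation controls mentioned earlier: wrong entries such as marks above 100 are rejected by the database itself.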
DATA DICTIONARY
LIBRARY RECORD TABLE
MARKS RECORD TABLE
ATTENDANCE RECORD TABLE
SUBJECT RECORD TABLE
Data flow diagrams were proposed by Larry Constantine, the original developer of structured design, based on Martin and Estrin's "data flow graph" model of computation. Starting in the 1970s, data flow diagrams (DFDs) became a popular way to visualise the major steps and data involved in software system processes. DFDs were usually used to show data flows in a computer system, although they could in theory be applied to business process modelling. DFDs were useful for documenting the major data flows in a system.
A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. It is often used as a preliminary step to create an overview of the system, which can later be elaborated. DFDs can also be used for the visualization of data processing (structured design).
A DFD shows what kind of information will be input to and output from the system, where the data will come from and go to, and where the data will be stored. It does not show information about the timing of processes, or about whether processes will operate in sequence or in parallel.
A DFD, also known as a "bubble chart", has the purpose of clarifying system requirements and identifying major transformations. It shows the flow of data through a system. It is a graphical tool because it presents a picture. The DFD may be partitioned into levels that represent increasing detail of information flow and function.
DATA FLOW
PROCESS
EXTERNAL ENTITY
DATA STORE
DATA FLOW
The previous three symbols may be interconnected with data flows. These represent the
flow of data to or from a process. The symbol is an arrow and next to it a brief
description of the data that is represented. There are some interconnections, though, that are not allowed:
• Between a data store and another data store. This would imply that one data store could independently decide to send some of its information to another data store. In practice this must involve a process.
• Between an external entity and a data store. This would mean that an external entity could read from or write to the data store, having direct access. Again, in practice this must involve a process.
Also, it is unusual to show interconnections between external entities. We are not normally concerned with information exchanges between two external entities, as they are outside our system and of no interest to us.
The data flow is used to describe the movement of information from one part of the system to another part. Flows represent data in motion. A data flow is a pipeline through which information flows.
Data flow
Fig.6 data flow
PROCESS
Processes are actions that are carried out with the data that flows around the system. A
process accepts input data needed for the process to be carried out and produces data that
it passes on to another part of the DFD. The processes that are identified on a design
DFD will be provided in the final artefact. They may be provided for using special
screens for input and output or by the provision of specific buttons or menu items. Each
identifiable process must have a well-chosen process name that describes what the
process will do with the information it uses and the output it will produce. Process names
must be well chosen to give a precise meaning to the action to be taken. It is good
practice to always start with a strong verb and to follow with not more than four or five
words.
Try to avoid using the verb "process", otherwise it is easy to use it for every process. We already know from the symbol that it is a process, so this does not help us to understand what the process does. A circle or bubble represents a process that transforms incoming data into outgoing data.
Process
Fig.7 Process
EXTERNAL ENTITY
External entities are those things that are identified as needing to interact with the system
under consideration. The external entities either input information to the system, output
information from the system or both. Typically they may represent job titles or other
systems that interact with the system to be built. Some examples are given below in
Figure 1. Notice that the SSADM symbol is an ellipse. If the same external entity is
shown more than once on a diagram (for clarity) a diagonal line indicates this.
A square defines a source or destination of system information; it is not a part of the system. External entities represent any entity that supplies or receives information from the system.
EXTERNAL ENTITY
Fig.8 external entity
DATA STORE
Data stores are places where data may be stored. This information may be stored either
temporarily or permanently by the user. In any system you will probably need to make
some assumptions about which relevant data stores to include. How many data stores you
place on a DFD somewhat depends on the case study and how far you go in being
specific about the information stored in them. The data store represents a logical file: the data store symbol can stand for either a data structure or a physical file on disk.
Data store
Fig.9 Data store
THEORY
This context-level DFD is next "exploded", to produce a Level 1 DFD that shows some
of the detail of the system being modelled. The Level 1 DFD shows how the system is
divided into sub-systems (processes), each of which deals with one or more of the data
flows to or from an external agent, and which together provide all of the functionality of
the system as a whole. It also identifies internal data stores that must be present in order
for the system to do its job, and shows the flow of data between the various parts of the
system.
Data flow diagrams are one of the three essential perspectives of the structured-systems
analysis and design method SSADM. The sponsor of a project and the end users will
need to be briefed and consulted throughout all stages of a system's evolution. With a
data flow diagram, users are able to visualize how the system will operate, what the
system will accomplish, and how the system will be implemented. The old system's
dataflow diagrams can be drawn up and compared with the new system's data flow
diagrams to draw comparisons to implement a more efficient system. Data flow diagrams
can be used to provide the end user with a physical idea of where the data they input
ultimately has an effect upon the structure of the whole system from order to dispatch to
report. How any system is developed can be determined through a data flow diagram
model. In the course of developing a set of levelled data flow diagrams the
analyst/designer is forced to address how the system may be decomposed into component
sub-systems, and to identify the transaction data in the data model.
There are different notations used to draw data flow diagrams (Yourdon & Coad and Gane & Sarson), defining different visual representations for processes, data stores, data flows and external entities.
PHYSICAL DFD
A physical DFD shows how the system is actually implemented, either at the moment
(Current Physical DFD), or how the designer intends it to be in the future (Required
Physical DFD). Thus, a Physical DFD may be used to describe the set of data items that
appear on each piece of paper that move around an office, and the fact that a particular
set of pieces of paper are stored together in a filing cabinet. It is quite possible that a
Physical DFD will include references to data that are duplicated, or redundant, and that the data stores, if implemented as a set of database tables, would constitute an un-normalized relational database. In contrast, a logical DFD attempts to capture the data flow aspects of a system in a form that has neither redundancy nor duplication.
‘0’ LEVEL DFD
A context diagram is a top-level (also known as "Level 0") data flow diagram. It only contains one process node ("Process 0") that generalizes the function of the entire system in relationship to external
entities. A context level DFD is the most basic form of DFD. It aims to show how the
entire system works at a glance. There is only one process in the system and all the data
flows either into or out of this process. Context-level DFDs demonstrate the interactions between the process and external entities. They do not contain data stores. When drawing context-level DFDs, we must first identify the process, all the external entities and all the data flows. We must also state any assumptions we make about the system. It is advised that we draw the process in the middle of the page. We then draw our external entities in the corners and finally connect our entities to our process with data flows.
This DFD provides an overview of the data entering and leaving the system. It also
shows the entities that are providing or receiving that data. These correspond usually to
the people that are using the system we will develop. The context diagram helps to define
our system boundary to show what is included in, and what is excluded from, our system.
‘0’ LEVEL DFD
[Figure: Context-level diagram. The student, teacher and admin exchange ID & password, personal details (name, mobile, address, birth date, roll no), marks, attendance, grades and remarks with the central Virtual Classroom Management System, which responds to each actor.]
Fig.10 ‘0’ LEVEL DFD
LEVEL 1 DFD
Level 1 DFDs aim to give an overview of the full system. They look at the system in more detail. Major processes are broken down into sub-processes. Level 1 DFDs also identify the data stores that are used by the major processes. When constructing a Level 1 DFD, we must start by examining the context-level DFD and break up the single process into its sub-processes. We must then pick out the data stores from the text we are given and include them in our DFD. As in the context-level DFD, all entities, data stores and processes must be labelled. We must also state any assumptions made from the text.
[Figure: Level 1 DFD. The admin, librarian, teacher and student each send login data to a Login process, which returns verification data; a Check/Update process maintains the student record, teacher record and assignment data stores.]
Fig.11 ’1’-Level DFD
LEVEL 2 DFD
[Figure: Level 2 DFD. Check and update processes read and write the teacher record and student record data stores.]
E-R DIAGRAM
In software engineering, an entity–relationship model (ER model) is a data model for describing the data aspects of a business domain in a way that can ultimately be implemented in a database, such as a relational database. The main components of ER models are entities and the relationships between them. The model was introduced by Peter Chen; however, variants of the idea existed previously and have been devised subsequently, such as supertype and subtype data entities and commonality relationships.
INTRODUCTION
An ER model defines a subject area of business data. It does not define business processes; it only visualizes business data. The data is represented as components (entities) that are linked with each other by relationships that express the dependencies and requirements between them, such as: one building may be divided into zero or more apartments, but one apartment can only be located in one building. Entities may have various properties (attributes) that characterize them. In a relational database, which stores data in tables, every row of each table represents one instance of an entity. Some data fields in these tables point to indexes in other tables; such pointers are the physical implementation of the relationships.
The three schema approach to software engineering uses three levels of ER models that may be developed.
CONCEPTUAL DATA MODEL
This is the highest level ER model in that it contains the least granular detail but establishes the overall scope of what is to be included within the model set. The conceptual ER model normally defines the master reference data entities that are commonly used by the organization. A conceptual ER model may be used as the foundation for one or more logical data models (see below). The purpose of the conceptual ER model is then to establish structural metadata commonality for the master data entities between the set of logical ER models. The conceptual data model may be used to form commonality relationships between ER models.
LOGICAL DATA MODEL
A logical ER model does not require a conceptual ER model, especially if the scope of the logical ER model includes only the development of a distinct information system. The logical ER model contains more detail than the conceptual ER model. In addition to master data entities, operational and transactional data entities are now defined. The details of each data entity are developed and the relationships between these data entities are established.
PHYSICAL DATA MODEL
One or more physical ER models may be developed from each logical ER model. The physical ER model is normally developed to be instantiated as a database: it defines database tables, database indexes such as unique key indexes, and database constraints. It is normally used to design modifications to the relational database objects and to maintain the structural metadata of the database. The first stage of information system design uses these models during the requirements analysis to describe information needs or the type of information that is to be stored in a database.
The data modeling technique can be used to describe any ontology (i.e. an overview and
classifications of used terms and their relationships) for a certain area of interest. In the
case of the design of an information system that is based on a database, the conceptual
data model is, at a later stage (usually called logical design), mapped to a logical data
model, such as the relational model; this in turn is mapped to a physical model during physical design. Note that sometimes, both of these phases are referred to as "physical design".
ATTRIBUTES
Fig.13 Attributes
KEY ATTRIBUTES
COMPOSITE ATTRIBUTES
MULTIVALUED ATTRIBUTES
RELATIONSHIP
Fig.19 Relationship
LINKS
Fig.20 Links
RELATIONSHIP
[Figure: instance mapping A1–A3 to B1–B3, a one-to-one relationship]
[Figure: instance mapping A1–A3 to B1–B3, a one-to-many relationship]
Fig.22 One to Many Relationship
[Figure: instance mapping A1–A4 to B1–B3, a many-to-many relationship]
A key is an attribute of a table which helps to identify a row. There can be many different types of keys.
1. SUPER KEY OR CANDIDATE KEY
A super key is a set of attributes of a table that can uniquely identify a row; a candidate key is a minimal such set. Generally they contain unique values and can never contain NULL values. There can be more than one super key or candidate key in a table, e.g. within a STUDENT table, Roll and Mobile No. can each uniquely identify a student.
2. PRIMARY KEY
It is one of the candidate keys that is chosen to be the identifying key for the entire table. E.g. although there are two candidate keys in the STUDENT table, the college would choose only one of them, say Roll, as the primary key.
3. ALTERNATE KEY
This is the candidate key which is not chosen as the primary key of the table. They are
named so because although not the primary key, they can still identify a row.
4. COMPOSITE KEY
Sometimes one key is not enough to uniquely identify a row. E.g. in a single class Roll is
enough to find a student, but in the entire school, merely searching by the Roll is not
enough, because there could be 10 classes in the school and each one of them may
contain a certain roll no. 5. To uniquely identify a student we have to say something like "class VII, roll no. 5". So a combination of two or more attributes is used to uniquely identify a row; such a combination is called a composite key.
5. FOREIGN KEY
Sometimes we may have to work with a table that does not have a primary key of its own. To identify its rows, we have to use the primary key attribute of a related table. Such an attribute is called a foreign key.
Based on the concept of the foreign key, there may arise a situation when we have to relate an entity having a primary key of its own and an entity not having a primary key of its own. In such a case, the entity having its own primary key is called a strong entity and the entity not having its own primary key is called a weak entity. Whenever we need to relate a strong and a weak entity together, the ERD changes just a little. For example, consider a STUDENT entity, a strong entity having the primary key Roll, related to a HOME entity. HOME may not have a unique primary key, as its only attribute, Address, may be shared by many homes (what if it is a housing estate?). HOME is a weak entity in this case. The ERD of this statement would be like the following.
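While the diagram itself is not reproduced here, the keys discussed in this section can be demonstrated with the STUDENT and HOME example. The table and column details below are an illustrative sketch following the text, not the project's actual schema:

```python
import sqlite3

# Roll is STUDENT's primary key; HOME is a weak entity identified by
# the owning student's Roll plus its own Address, i.e. a composite key
# that contains a foreign key.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE student (
    roll   INTEGER PRIMARY KEY,                 -- primary key
    mobile TEXT UNIQUE,                         -- alternate/candidate key
    name   TEXT
);
CREATE TABLE home (
    roll    INTEGER REFERENCES student(roll),   -- foreign key
    address TEXT,
    PRIMARY KEY (roll, address)                 -- composite key
);
""")
conn.execute("INSERT INTO student VALUES (5, '98765', 'Meera')")
conn.execute("INSERT INTO home VALUES (5, 'Green Housing Estate')")

# A HOME row for a non-existent student violates the foreign key.
try:
    conn.execute("INSERT INTO home VALUES (99, 'Nowhere')")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
print("foreign key enforced:", fk_enforced)
```

The weak entity cannot exist without its strong entity: the database rejects any HOME row whose Roll does not match an existing STUDENT.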
ENTITY–RELATIONSHIP MODELLING
When we speak of an entity, we normally speak of some aspect of the real world that can be distinguished from other aspects of the real world. An entity is a thing that exists either physically or logically. An entity may be a physical object such as a house or a car (they exist physically), an event such as a house sale or a car service, or a concept such as a customer transaction or order (they exist logically, as a concept). Although the term entity is the one most commonly used, following Chen we should really distinguish between an entity and an entity-type. Because the term entity-type is somewhat cumbersome, most people tend to use the term entity as a synonym for it. Entities can be thought of as nouns.
A relationship captures how entities are related to one another. Relationships can be thought of as verbs, linking two or more nouns; an example is an owns relationship between a company and a computer. The linguistic aspect of the model is utilized in the declarative database query language ERROL, which mimics natural-language constructs. ERROL's semantics and implementation are based on reshaped relational algebra (RRA), a relational algebra that is adapted to the entity–relationship model and captures its linguistic aspect. Entities and relationships can both have attributes; for example, an employee entity might have a Social Security Number (SSN) attribute, while a proved relationship may have a date attribute. Every entity (unless it is a weak entity) must have a minimal set of uniquely identifying attributes, called its primary key. Entity–relationship diagrams do not show single entities or single instances of relations. Rather, they show entity sets (all
entities of the same entity type) and relationship sets (all relationships of the same
relationship type). Example: a particular song is an entity. The collection of all songs in a
database is an entity set. The eaten relationship between a child and her lunch is a single relationship; the set of all such child–lunch relationships in a database is a relationship set.
[Figure: E-R diagram. Four entity sets: library record (name, book id), marks record (marks, name), attendance record (name, password) and subject record (id, subject, name, mobile no, semester), linked by relationships such as has, manage, create and see.]
DESCRIPTION
This is the entity–relationship diagram of the proposed system. In this E-R diagram there are four entity sets: library record, marks record, attendance record, and subject record. Each entity set is connected to its attributes. The library record has a login, and it is a one-to-one relation. It can create the marks, and this is a one-to-many relation. It can also see the attendance record, and this is a many-to-many relation. Here the relations "create" and "see" are represented by an aggregation. Likewise, the subject can also have a login, and it is a many-to-many relation.