
A

Major Project
on
Privacy-Preserving Public Auditing for Shared Cloud Data
with Secure Group Management
submitted
in partial fulfillment of the requirements for
the award of the degree of
Bachelor of Technology
in
COMPUTER SCIENCE AND ENGINEERING
by
M. Vinay Kumar - 229P5A0512
K. Srinath - 229P5A0505
V. Shiva Kumar Goud - 229P5A0524

Under the supervision of


J. Prashanthi
Assistant Professor
Department of Computer Science and Engineering

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


SREE DATTHA GROUP OF INSTITUTIONS
(Approved by AICTE New Delhi, Accredited by NAAC, Affiliated to JNTUH)
SHERIGUDA (v), IBRAHIMPATNAM (M), RANGAREDDY -501510
2024-2025

SREE DATTHA GROUP OF INSTITUTIONS
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

DECLARATION

We hereby declare that the project report titled “Privacy-Preserving Public Auditing
for Shared Cloud Data with Secure Group Management”, carried out under the guidance of
J. Prashanthi, Sree Dattha Group of Institutions, Ibrahimpatnam, and submitted in partial
fulfillment of the requirements for the award of B. Tech. in Computer Science and
Engineering, is a record of bonafide work carried out by us, and the results embodied in this
project have not been reproduced or copied from any source.

The results embodied in this project report have not been submitted to any other University or
Institute for the award of any Degree or Diploma.

Name of the Students


M. Vinay Kumar 229P5A0512
K. Srinath 229P5A0505
V. Shiva Kumar 229P5A0524

SREE DATTHA GROUP OF INSTITUTIONS
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE

This is to certify that the project entitled “Privacy-Preserving Public Auditing for Shared
Cloud Data with Secure Group Management” is being submitted by M. Vinay Kumar
(229P5A0512), K. Srinath (229P5A0505), V. Shiva Kumar (229P5A0524) in partial
fulfillment of the requirements for the award of B. Tech in Computer Science and
Engineering to the Jawaharlal Nehru Technological University Hyderabad is a record of
bonafide work carried out by them under our guidance and supervision during the academic
year 2024-25.

J. Prashanthi Dr. A. Yashwanth Reddy


Internal Guide HOD

External Examiner
The results embodied in this thesis have not been submitted to any other University or
Institute for the award of any degree or diploma.

Submitted for Viva Voce Examination held on _______________________

ACKNOWLEDGEMENT

Apart from our efforts, the success of any project depends largely on the encouragement and
guidelines of many others. We take this opportunity to express our gratitude to the people
who have been instrumental in the successful completion of this project.

We would like to express our sincere gratitude to Chairman Sri. G. Panduranga Reddy, and
Vice-Chairman Dr. GNV Vibhav Reddy for providing excellent infrastructure and a nice
atmosphere throughout this project. We are obliged to Dr. M. Senthil Kumar, Principal for
being cooperative throughout this project.

We are also thankful to Dr. A. Yashwanth Reddy, Head of the Department & Professor CSE
Department of Computer Science and Engineering for providing encouragement and support
for completing this project successfully.

We take this opportunity to express our profound gratitude and deep regard to our internal
guide, J. Prashanthi, Assistant Professor, for her exemplary guidance, monitoring, and
constant encouragement throughout the project work. The blessing, help, and guidance given
by her shall carry us a long way in the journey of life on which we are about to embark.

We also received guidance and support from all the members of Sree Dattha Group of
Institutions who contributed to the completion of the project. We are grateful for their
constant support and help.

Finally, we would like to take this opportunity to thank our family for their constant
encouragement, without which this assignment would not be completed. We sincerely
acknowledge and thank all those who gave support directly and indirectly in the completion
of this project.

ABSTRACT

With cloud storage services, users can store their data in the cloud and efficiently access the
data at any time and any location. However, when data are stored in the cloud, there is a risk
of data loss because users lose direct control over their data. To solve this problem, many
cloud storage auditing techniques have been studied. In 2019, Tian et al. proposed a public
auditing scheme for shared data that supports data privacy, identity traceability, and group
dynamics. In this paper, we point out that their scheme is insecure against tag forgery or proof
forgery attacks, which means that, even if the cloud server has deleted some outsourced data,
it can still generate a valid proof that it has accurately stored the data. We then propose a new
scheme that provides the same functionalities and is secure against the above attacks.
Moreover, we compare the results with other schemes in terms of computation and
communication costs.

LIST OF FIGURES

FIG NO TITLE

5.1 System Architecture

5.2.1 Use Case Diagram

5.2.2 Class Diagram

5.2.3 Sequence Diagram

5.2.4 Data Flow Diagram

5.2.5 Flow Chart Diagram

LIST OF CONTENTS

S. No. CONTENTS

1 Introduction

2 System Analysis

2.1 Existing System

2.2 Proposed System

3 System Specification

3.1 Hardware Requirements

3.2 Software Requirements

4 Implementation

4.1 Modules

4.2 Modular Description

5 System Design

5.1 System Architecture

5.2 UML Diagrams

6 Literature Survey

7 Software Environment

7.1 Java Technology

7.2 What can Java Technology do?

7.3 How Java Technology changes life

7.4 ODBC

7.5 JDBC

7.6 Networking

8 System Study

8.1 Feasibility Study

8.1.1 Economic feasibility

8.1.2 Technical feasibility

8.1.3 Social feasibility

9 System Test

9.1 Types of Tests

10 Sample Code

11 Screen Shots

12 Conclusion

13 References

CHAPTER 1
INTRODUCTION
1.1 Introduction
Cloud storage provides users with significant storage capacity and advantages such as cost
reduction, scalability, and convenient access to the stored data. Therefore, cloud storage that
is managed and maintained by professional cloud service providers (CSPs) is widely used by
many enterprises and personal clients. Once the data are stored in cloud storage, the clients
lose direct control over the stored data. Despite this, the CSPs must ensure that the client data
are placed in cloud storage without any modification or substitution. The simplest way to
achieve this is to check the integrity of the stored data after downloading them. When the
volume of stored data is large, however, this is quite inefficient, and thus many methods for verifying
the integrity of data stored in the cloud without a full download have been proposed.

These techniques are called cloud storage auditing and can be classified into private auditing
and public auditing according to who performs the integrity verification. In private auditing,
verification is performed by the users who own the stored data. Public auditing is
conducted by a third-party auditor (TPA) on behalf of the users to reduce their burden, and
thus public auditing schemes are more widely employed for cloud storage auditing. Public
auditing schemes provide various properties depending on the environment, such as privacy
preservation, data dynamics, and shared data. Privacy-preserving auditing is used to conduct
an integrity verification while protecting data information from the TPA, and dynamic data
auditing is where legitimate users are free to add, delete, or change the stored data. Shared
data auditing means freely sharing data within a legitimate user group. In this case, a
legitimate user group should be defined, and user addition and revocation should be carefully
considered. Recently, schemes that satisfy identity traceability, a concept that can trace the
abnormal behavior of legitimate users in shared data auditing, have also been proposed.

Tian et al. proposed a scheme that supports privacy preservation, data dynamics, and identity
traceability in shared data auditing. For efficient user enrollment and revocation, the authors
adopted the lazy revocation technique. Moreover, to secure the design against collusion
attacks between the revoked user and the server, they incorporate into the scheme a technique
in which the group manager manages the message and tag blocks generated by the revoked user.

Because the lazy-revocation technique is applied to the scheme, even if a user is revoked, no
additional operation occurs until additional changes are made to the block.

In this paper, we show that Tian et al.'s scheme is insecure against two types of
attacks, a tag forgery and a proof forgery, and propose a new scheme that provides the same
functionality and is secure against the above attacks. In their scheme, a tag forgery is possible
by exploiting the vulnerability that the tag is created in a malleable way, and a proof
forgery is possible by exploiting the fact that the secret value is exposed to the server when
additional changes to the block occur after the user is revoked.

CHAPTER 2
SYSTEM ANALYSIS
2.1 Existing System
Ateniese et al. first introduced a provable data possession (PDP) scheme and provided
two provably secure PDP schemes using RSA-based homomorphic authenticators. These
support public verification with low communication and computation costs. At the same
time, Juels et al. first proposed the concept and a formal security model of proofs of
retrievability (POR) and a sentinel-based POR scheme with certain properties. Later,
Shacham et al. improved the POR scheme and proposed a new public auditing scheme that
was built from the BLS signature and is secure in the random oracle model. In recent years,
many studies have been conducted on cloud storage auditing, supporting various
functionalities such as data privacy preservation, data dynamics, and shared data.
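For orientation, the BLS-signature-based construction of Shacham et al., on which many of the later public auditing schemes build, can be sketched as follows. The notation below is ours and simplified; it is not Tian et al.'s exact scheme. The owner holds a secret key x with public key v = g^x, u and g are public generators, and H is a hash-to-group function.

\begin{align*}
\sigma_i &= \bigl(H(\mathrm{name}\,\|\,i)\cdot u^{m_i}\bigr)^{x} && \text{tag on block } m_i \\
\mu &= \sum_{(i,\nu_i)\in Q} \nu_i m_i, \qquad \sigma = \prod_{(i,\nu_i)\in Q} \sigma_i^{\nu_i} && \text{proof for challenge set } Q \\
e(\sigma,\, g) &\stackrel{?}{=} e\Bigl(\prod_{(i,\nu_i)\in Q} H(\mathrm{name}\,\|\,i)^{\nu_i}\cdot u^{\mu},\; v\Bigr) && \text{verification by the TPA}
\end{align*}

The homomorphic structure of the tags is what allows many blocks to be checked with a single pairing equation.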

Erway et al. first proposed a PDP scheme using a rank-based authenticated skip list to
support data dynamics. However, the scheme suffers from high computational and
communication costs, and to address this concern, Wang et al. proposed a new auditing
scheme employing the Merkle Hash Tree (MHT), which is much simpler.

Although Wang et al. proposed a privacy-preserving public auditing scheme, their approach
requires heavy communication and computation costs in the audit and data update process.
Zhu et al. also proposed a new scheme using another authenticated data structure, called an
index hash table (IHT), to support data dynamics. Although this scheme succeeded in
reducing the communication and computation costs, it did not resolve the inefficiency
of lookup and update operations. Shen et al. proposed a new efficient scheme with a doubly
linked information table and location array. Tian et al. recently proposed a more efficient
scheme using a dynamic hash table (DHT), which has been proven to be more effective than
IHT for data updating. In terms of data privacy, Wang et al. first proposed a privacy-
preserving public auditing scheme to protect data privacy through random masking, and
many schemes for protecting data privacy have since been studied.
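As a rough sketch of the random-masking idea (our simplified notation, following the general approach of Wang et al. rather than any exact protocol), the server blinds the aggregated blocks before returning them to the TPA, so the TPA never sees the raw linear combination of file blocks:

\begin{align*}
\mu' &= \sum_{(i,\nu_i)\in Q} \nu_i m_i, \qquad R = e(u, v)^{r}, \qquad \gamma = h(R), \\
\mu &= r + \gamma\,\mu' \bmod p \quad (\text{the TPA receives only } R \text{ and } \mu\text{, never } \mu').
\end{align*}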

Wang et al. proposed an efficient public auditing scheme called Knox for shared data. The
scheme supports hiding the identity of individual users based on a group signature, but does
not support user revocation. In Oruta, a ring signature is used to hide the identity of individual
users; however, the scheme also has a problem in that all user keys and block tags must be
regenerated to provide user revocation.

Disadvantages

 In the existing system, there is no data auditing technique to support data
verification.
 The existing system does not use dynamic hash tables to maintain the data blocks.

2.2 Proposed System

We show that Tian et al.'s scheme is insecure against two types of attacks: tag and proof
forgeries. In the tag forgery, we show that an attacker can create a valid tag for a modified
message without knowing any secret values. In the proof forgery, we show that an attacker
can create a valid proof for the given challenged message even if some files stored on the
cloud have been deleted.

We design a new public auditing scheme that is secure against the above attacks and has the
same functionalities, such as privacy preservation, data dynamics, data sharing, and identity
traceability. We changed the tag generation method to eliminate the malleable property and
the data proof generation method to enhance privacy preservation. We also changed the lazy
revocation process to protect the secret information from the CSP and proposed an active
revocation process that can be flexibly applied in various environments.

We formally prove the security of the proposed scheme. According to the theorems, the
attacker cannot generate a valid tag and proof without knowing the secret values or the
original messages, respectively. We also provide comparison results with other schemes in
terms of the computation and communication costs.

Advantages

 In the proposed system, an extended dynamic hash table (EDHT) is used to manage the
data blocks handled by revoked users.
 In the proposed system, the modification record table (MRT) is a two-dimensional data
structure in which the group manager records the operations performed on each block,
providing identity traceability. A minimal sketch of these structures is given below.
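The following is a minimal Java sketch of what such bookkeeping structures could look like. The class names, fields, and the string-based record format are our own illustrative assumptions; they are not the exact EDHT and MRT definitions of the underlying paper.

// Illustrative sketch only: field names and the record format are assumptions, not the paper's definitions.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class EdhtNode {
    int blockIndex;      // index of the data block
    int version;         // bumped whenever the block is updated
    long timestamp;      // when the current tag was generated
    String lastSignerId; // identity of the (possibly revoked) user who last signed the block

    EdhtNode(int blockIndex, int version, long timestamp, String lastSignerId) {
        this.blockIndex = blockIndex;
        this.version = version;
        this.timestamp = timestamp;
        this.lastSignerId = lastSignerId;
    }
}

class ModificationRecordTable {
    // blockIndex -> ordered list of "userId:operation" entries, giving identity traceability
    private final Map<Integer, List<String>> records = new HashMap<Integer, List<String>>();

    void record(int blockIndex, String userId, String operation) {
        List<String> history = records.get(Integer.valueOf(blockIndex));
        if (history == null) {
            history = new ArrayList<String>();
            records.put(Integer.valueOf(blockIndex), history);
        }
        history.add(userId + ":" + operation);
    }

    List<String> historyOf(int blockIndex) {
        List<String> history = records.get(Integer.valueOf(blockIndex));
        return history == null ? new ArrayList<String>() : history;
    }
}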

CHAPTER 3
SYSTEM SPECIFICATION

3.1 Hardware Requirements


System : i3 / i5 / i7 processor
Hard Disk : 40 GB
RAM : 4 GB

3.2 Software Requirements


Operating System : Windows XP / Windows 7 / Windows 8
Coding Language : Java (AWT, Swing, Networking)
Database : MySQL / MS Access
Documentation : MS Office
IDE : Eclipse Galileo
Development Kit : JDK 1.6

CHAPTER 4
IMPLEMENTATION
4.1 Modules
 Data Owner
 Group Manager
 Cloud Server
 Data Consumer (End User)
 Attacker

4.2 Modular Description

Data Owner
In this module, the data owner should register by providing user name, password, email and
group, after registering owner has to Login by using valid user name and password. The Data
owner browses and uploads their data to the cloud server. For the security purpose the data
provider encrypts the data file and then stores in the cloud server via Group Manager. The
Owner is also responsible for uploading metadata to the Third-Party Authenticator (TPA).
The Data owner can have capable of manipulating the encrypted data file.

Group Manager

The Group Manager (GM) is a group-based component that interconnects the cloud
repositories in this system, acting as an interface between client applications and the cloud.
Attribute-based access control and proxy re-encryption mechanisms are jointly applied in the
GM for authentication and authorization.

Cloud Server

The cloud server is responsible for data storage and file authorization for end users. Each
data file is stored in the cloud server along with its tags, such as owner, file name, secret key,
MAC, and private key; the registered owners and end users can also be viewed in the cloud
server. A data file is sent only according to the requester's privileges: the cloud server checks
the file name, end-user name, and secret key, and if all of them are correct, the file is sent to
the corresponding user; otherwise, the requester is captured as an attacker. A simplified
sketch of this check follows below.
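Below is a hypothetical Java sketch of the check just described. The class, method, and parameter names are ours and do not appear in the project's actual code; it only illustrates that a request is served when the file name, end-user name, and secret key all match the stored record.

// Hypothetical sketch of the authorization check; names are illustrative only.
public class AccessCheck {

    // Returns true only when the requested file name, end-user name, and secret key
    // all match the record stored on the cloud server.
    public static boolean isAuthorized(String storedFile, String storedUser, String storedKey,
                                       String reqFile, String reqUser, String reqKey) {
        return storedFile.equals(reqFile)
                && storedUser.equals(reqUser)
                && storedKey.equals(reqKey);
    }

    public static void main(String[] args) {
        boolean ok = isAuthorized("report.txt", "user1", "K123",
                                  "report.txt", "user1", "WRONG-KEY");
        System.out.println(ok ? "send file to user" : "flag request as attacker");
    }
}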

Data Consumer (End User)

The data consumer is the end user who requests a file and receives its contents from the
corresponding cloud server. If the file name, secret key, and access permission (.java, .txt,
.log) are correct, the end user receives the file from the cloud; otherwise, he is considered an
attacker and is blocked by the corresponding cloud. To access files after being blocked, the
user must first be unblocked by the cloud.

Attacker

In our threat model, the attacker is one who tampers with a cloud file by adding a fake key to
the file in the cloud. The attacker may be inside the cloud or outside it: attackers from inside
the cloud are called internal attackers, and attackers from outside the cloud are called external
attackers.

CHAPTER 5
SYSTEM DESIGN
5.1 System Architecture

Fig: 5.1 System Architecture

5.2 UML Diagrams

UML stands for Unified Modeling Language. UML is a standardized general-purpose
modeling language in the field of object-oriented software engineering. The standard is
managed, and was created by, the Object Management Group.

The goal is for UML to become a common language for creating models of object-oriented
computer software. In its current form, UML comprises two major components: a meta-model
and a notation. In the future, some form of method or process may also be added to, or
associated with, UML.

The Unified Modeling Language is a standard language for specifying, visualizing,
constructing, and documenting the artifacts of a software system, as well as for business
modeling and other non-software systems.

5.2.1 Use-case Diagram

A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented
as use cases), and any dependencies between those use cases.

Fig: 5.2.1 Use-case Diagram

5.2.2 Class Diagram

In software engineering, a class diagram in the Unified Modeling Language (UML) is a type
of static structure diagram that describes the structure of a system by showing the system's
classes, their attributes, operations (or methods), and the relationships among the classes. It
explains which class contains information.

Fig: 5.2.2 Class Diagram

5.2.3 Sequence Diagram

A sequence diagram in the Unified Modeling Language (UML) is a kind of interaction diagram
that shows how processes operate with one another and in what order. It is a construct of a
Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event
scenarios, and timing diagrams.

Fig: 5.2.3 Sequence Diagram

5.2.4 Data Flow Diagram

A Data Flow Diagram (DFD) is a visual representation of how data moves through a system,
showing the inputs, processes, storage, and outputs. It helps in understanding the flow of
information and the transformation of data within a system.

Fig: 5.2.4 Data Flow Diagram

5.2.5 Flow Chart

Fig: 5.2.5 Flow Chart Diagram

CHAPTER 6
LITERATURE SURVEY

Cloud storage services have become widely adopted due to their cost efficiency, scalability,
and ubiquitous access. However, outsourcing data storage to cloud service providers (CSPs)
introduces data integrity concerns, as users lose direct control over their data. To address this,
public auditing schemes have been proposed that allow third-party auditors (TPAs) to verify
the integrity of data without downloading it.

Ateniese et al. introduced the Provable Data Possession (PDP) model using RSA-based
homomorphic authenticators for public verification with low communication costs. Juels and
Kaliski followed with Proofs of Retrievability (POR) using sentinel-based approaches.
Shacham and Waters advanced this with BLS signatures for compact proofs. These
foundational schemes, however, had limitations in handling dynamic data and preserving user
privacy.

Wang et al. proposed privacy-preserving auditing using random masking techniques, but
their solution imposed high communication overhead. Erway et al. presented dynamic PDPs
with authenticated skip lists, while Zhu et al. used Index Hash Tables (IHTs) for improved
performance. Later, Tian et al. developed a public auditing scheme supporting privacy,
identity traceability, and group dynamics using a dynamic hash table (DHT).

Despite its contributions, Tian et al.'s scheme was shown to be vulnerable to tag and proof
forgery attacks. The use of malleable tag generation and the exposure of revocation
parameters allowed attackers to forge valid proofs even after data deletion.

To address these issues, the authors of the current paper proposed a new public auditing
scheme that:

 Enhances security against tag and proof forgeries.

 Supports privacy preservation, data dynamics, shared data, and identity traceability.

 Introduces both lazy and active revocation techniques for flexible user revocation.

 Uses extended dynamic hash tables (EDHT) and modification record tables
(MRT) for data integrity tracking.

The proposed scheme was formally proven secure under the Computational Diffie-Hellman
(CDH) assumption and demonstrated better resistance to collusion attacks. Additionally, it
was evaluated to have comparable computation and communication costs to prior
schemes, particularly Tian et al.'s, while addressing their key vulnerabilities.
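For reference, the Computational Diffie-Hellman (CDH) assumption mentioned above states that the following problem is infeasible for any efficient adversary in a cyclic group $G = \langle g \rangle$ of prime order $p$:

\[
\text{CDH: given } (g,\; g^{a},\; g^{b}) \text{ for random } a, b \in \mathbb{Z}_p,\ \text{compute } g^{ab}.
\]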

CHAPTER 7

SOFTWARE ENVIRONMENT
7.1 Java Technology
Java technology is both a programming language and a platform.

7.1.1 The Java programming language


The Java programming language is a high-level language that can be characterized by all of
the following buzzwords:

 Simple
 Architecture neutral
 Object oriented
 Portable
 Distributed
 High performance
 Interpreted
 Multithreaded
 Robust
 Dynamic
 Secure
With most programming languages, you either compile or interpret a program so that you can
run it on your computer. The Java programming language is unusual in that a program is both
compiled and interpreted. With the compiler, first you translate a program into an
intermediate language called Java byte codes —the platform-independent codes interpreted
by the interpreter on the Java platform. The interpreter parses and runs each Java byte code
instruction on the computer. Compilation happens just once; interpretation occurs each time
the program is executed. The following figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java Virtual
Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web
browser that can run applets, is an implementation of the Java VM. Java byte codes help
make “write once, run anywhere” possible. You can compile your program into byte codes on
any platform that has a Java compiler. The byte codes can then be run on any implementation
of the Java VM. That means that as long as a computer has a Java VM, the same program
written in the Java programming language can run on Windows 2000, a Solaris workstation,
or on an iMac.
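As a minimal illustration of this compile-once, run-anywhere pipeline, the following program (the file name and output text are ours) is compiled to byte codes with javac and then executed by any Java VM with java:

// HelloWorld.java -- illustrating the compile-once, run-anywhere pipeline described above.
// Compile once to byte codes:   javac HelloWorld.java   (produces HelloWorld.class)
// Run on any Java VM:           java HelloWorld
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from the Java platform");
    }
}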

7.1.2 The Java platform

A platform is the hardware or software environment in which a program runs. We’ve
already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris,
and MacOS. Most platforms can be described as a combination of the operating system and
hardware. The Java platform differs from most other platforms in that it’s a software-only
platform that runs on top of other hardware-based platforms.

The Java platform has two components:


 The Java Virtual Machine (Java VM)
 The Java Application Programming Interface (Java API)
You’ve already been introduced to the Java VM. It’s the base for the Java platform and is
ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many
useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped
into libraries of related classes and interfaces; these libraries are known as packages. The
next section, What Can Java Technology Do?, highlights what functionality some of the
packages in the Java API provide.
The following figure depicts a program that’s running on the Java platform. As the figure
shows, the Java API and the virtual machine insulate the program from the hardware.

Native code is code that after you compile it, the compiled code runs on a specific hardware
platform. As a platform-independent environment, the Java platform can be a bit slower than
native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code
compilers can bring performance close to that of native code without threatening portability.

7.2 What can Java technology do?


The most common types of programs written in the Java programming language are applets
and applications. If you’ve surfed the Web, you’re probably already familiar with applets.
An applet is a program that adheres to certain conventions that allow it to run within a Java-
enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for
the Web. The general-purpose, high-level Java programming language is also a powerful
software platform. Using the generous API, you can write many types of programs.
An application is a standalone program that runs directly on the Java platform. A special kind
of application known as a server serves and supports clients on a network. Examples of
servers are Web servers, proxy servers, mail servers, and print servers. Another specialized
program is a servlet. A servlet can almost be thought of as an applet that runs on the server
side. Java Servlets are a popular choice for building interactive web applications, replacing
the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of
applications. Instead of working in browsers, though, servlets run within Java Web servers,
configuring or tailoring the server.

How does the API support all these kinds of programs? It does so with packages of software
components that provide a wide range of functionality. Every full implementation of the
Java platform gives you the following features:
 The essentials: Objects, strings, threads, numbers, input and output, data
structures, system properties, date and time, and so on.
 Applets: The set of conventions used by applets.
 Networking: URLs, TCP (Transmission Control Protocol) and UDP (User
Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
 Internationalization: Help for writing programs that can be localized for
users worldwide. Programs can automatically adapt to specific locales and be
displayed in the appropriate language.
 Security: Both low level and high level, including electronic signatures,
public and private key management, access control, and certificates.
 Software components: Known as JavaBeansTM, can plug into existing
component architectures.
 Object serialization: Allows lightweight persistence and communication via
Remote Method Invocation (RMI).
 Java Database Connectivity (JDBCTM): Provides uniform access to a wide
range of relational databases.
The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration,
telephony, speech, animation, and more. The following figure depicts what is included in the
Java 2 SDK.

7.3 How will Java technology change life?
We can’t promise you fame, fortune, or even a job if you learn the Java programming
language. Still, it is likely to make your programs better and require less effort than other
languages. We believe that Java technology will help you do the following:

 Get started quickly: Although the Java programming language is a powerful


object-oriented language, it’s easy to learn, especially for programmers
already familiar with C or C++.
 Write less code: Comparisons of program metrics (class counts, method
counts, and so on) suggest that a program written in the Java programming
language can be four times smaller than the same program in C++.
 Write better code: The Java programming language encourages good coding
practices, and its garbage collection helps you avoid memory leaks. Its object
orientation, its JavaBeans component architecture, and its wide-ranging, easily
extendible API let you reuse other people’s tested code and introduce fewer
bugs.
 Develop programs more quickly: Your development time may be as much as
twice as fast versus writing the same program in C++. Why? You write fewer
lines of code and it is a simpler programming language than C++.
 Avoid platform dependencies with 100% Pure Java: You can keep your
program portable by avoiding the use of libraries written in other languages.
The 100% Pure JavaTM Product Certification Program has a repository of
historical process manuals, white papers, brochures, and similar materials
online.
 Write once, run anywhere: Because 100% Pure Java programs are compiled
into machine-independent byte codes, they run consistently on any Java
platform.
 Distribute software more easily: You can upgrade applets easily from a
central server. Applets take advantage of the feature of allowing new classes to
be loaded “on the fly,” without recompiling the entire program.

7.4 ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming interface for
application developers and database systems providers. Before ODBC became a de facto
standard for Windows programs to interface with database systems, programmers had to use
proprietary languages for each database they wanted to connect to. Now, ODBC has made the
choice of the database system almost irrelevant from a coding perspective, which is as it
should be. Application developers have much more important things to worry about than the
syntax that is needed to port their program from one database to another when business needs
suddenly change.
Through the ODBC Administrator in Control Panel, you can specify the particular database
that is associated with a data source that an ODBC application program is written to use.
Think of an ODBC data source as a door with a name on it. Each door will lead you to a
particular database. For example, the data source named Sales Figures might be a SQL Server
database, whereas the Accounts Payable data source could refer to an Access database. The
physical database referred to by a data source can reside anywhere on the LAN.
The ODBC system files are not installed on your system by Windows 95. Rather, they are
installed when you setup a separate database application, such as SQL Server Client or Visual
Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called
ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-
alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this
program and each maintains a separate list of ODBC data sources.

The advantages of this scheme are so numerous that you are probably thinking there must be
some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to
the native database interface. ODBC has had many detractors make the charge that it is too
slow. Microsoft has always claimed that the critical factor in performance is the quality of the
driver software that is used. In our humble opinion, this is true. The availability of good
ODBC drivers has improved a great deal recently. And anyway, the criticism about
performance is somewhat analogous to those who said that compilers would never match the
speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the
opportunity to write cleaner programs, which means you finish sooner. Meanwhile,
computers get faster every year.

7.5 JDBC
In an effort to set an independent database standard API for Java, Sun Microsystems
developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access
mechanism that provides a consistent interface to a variety of RDBMSs. This consistent
interface is achieved through the use of “plug-in” database connectivity modules, or drivers.
If a database vendor wishes to have JDBC support, he or she must provide the driver for each
platform that the database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you
discovered earlier in this chapter, ODBC has widespread support on a variety of platforms.
Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than
developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90-day public review that
ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released
soon after.

The remainder of this section covers enough information about JDBC for you to know what
it is about and how to use it effectively; it is by no means a complete overview of JDBC,
which would fill an entire book. As noted earlier, because Java programs are compiled into
platform-independent byte codes, the same JDBC code can run on any implementation of the
Java VM, for example on Windows NT, Solaris, or Macintosh.
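The following is a minimal JDBC usage sketch in the style of this project. It assumes the MySQL Connector/J driver is on the classpath and that a local database and table with the hypothetical names cloud_audit and cloud_files exist; the connection URL, user name, and password are placeholders.

// Minimal JDBC sketch; database name, table name, and credentials are hypothetical placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcExample {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver"); // register the driver (JDK 1.6 era style)
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/cloud_audit", "root", "password");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT owner, filename FROM cloud_files");
        while (rs.next()) {
            System.out.println(rs.getString("owner") + " -> " + rs.getString("filename"));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}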

7.6 Networking
7.6.1 TCP/IP stack

The TCP/IP stack is shorter than the OSI one:

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless
protocol.

7.6.2 IP Datagram

The IP layer provides a connectionless and unreliable delivery system. It considers each
datagram independently of the others. Any association between datagrams must be supplied by
the higher layers. The IP layer supplies a checksum that includes its own header. The header
includes the source and destination addresses. The IP layer handles routing through an
internet. It is also responsible for breaking up large datagrams into smaller ones for
transmission and reassembling them at the other end.

7.6.3 UDP

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents
of the datagram and port numbers. These are used to give a client/server model - see later.

7.6.4 TCP

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a
virtual circuit that two processes can use to communicate.

7.6.5 Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for
machines so that they can be located. The address is a 32 bit integer which gives the IP
address. This encodes a network ID and more addressing. The network ID falls into various
classes according to the size of the network address.

Network address

Class A uses 8 bits for the network address, with 24 bits left over for other addressing. Class B
uses 16-bit network addressing. Class C uses 24-bit network addressing, and class D addresses
are reserved for multicast.

Subnet address

Internally, the UNIX network is divided into subnetworks. Building 11 is currently on one
subnetwork and uses 10-bit addressing, allowing 1024 different hosts.

Host address

8 bits are finally used for host addresses within our subnet. This places a limit of 256
machines that can be on the subnet.

Total address

The 32-bit address is usually written as four integers separated by dots (for example, 192.168.1.10).

7.6.6 Port addresses

A service exists on a host, and is identified by its port. This is a 16-bit number. To send a
message to a server, you send it to the port for that service of the host that it is running on.
This is not location transparency! Certain of these ports are "well known".

7.6.7 Sockets

A socket is a data structure maintained by the system to handle network connections. A socket
is created using the call socket. It returns an integer that is like a file descriptor. In fact, under
Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>

#include <sys/socket.h>

int socket(int family, int type, int protocol);

Here "family" will be AF_INET for IP communications, protocol will be zero, and type will
depend on whether TCP or UDP is used. Two processes wishing to communicate over a
network create a socket each. These are similar to two ends of a pipe - but the actual pipe
does not yet exist.
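Since the project itself is written in Java, the same idea looks as follows with the java.net classes used elsewhere in this report. This is an illustrative client-side sketch only; the host, port, and message are placeholders (port 2007 mirrors the GM port used in the sample code later).

// Java counterpart of the C socket() call above; host, port, and message are placeholders.
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;

public class SocketClientSketch {
    public static void main(String[] args) throws Exception {
        Socket s = new Socket("localhost", 2007);                  // connect to a listening ServerSocket
        DataOutputStream out = new DataOutputStream(s.getOutputStream());
        out.writeUTF("hello");                                     // send a length-prefixed UTF string
        DataInputStream in = new DataInputStream(s.getInputStream());
        System.out.println(in.readUTF());                          // read the reply
        s.close();
    }
}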

CHAPTER 8

SYSTEM STUDY
8.1 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with
a very general plan for the project and some cost estimates. During system analysis, the
feasibility study of the proposed system is carried out. This is to ensure that the
proposed system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

8.1.1 ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and
development of the system is limited, so the expenditures must be justified. The developed
system is well within the budget, and this was achieved because most of the technologies
used are freely available. Only the customized products had to be purchased.

8.1.2 TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements
of the system. Any system developed must not place a high demand on the available technical
resources, as this would in turn place high demands on the client. The developed system must
therefore have modest requirements, and only minimal or no changes are required for
implementing this system.

8.1.3 SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not feel
threatened by the system, but must instead accept it as a necessity. The level of acceptance by
the users depends solely on the methods that are employed to educate the user about the
system and to make him familiar with it. His level of confidence must be raised so that he is
also able to offer constructive criticism, which is welcomed, as he is the final user of the system.

CHAPTER 9

SYSTEM TEST
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality
of components, subassemblies, assemblies, and/or a finished product. It is the process of
exercising software with the intent of ensuring that the software system meets its requirements
and user expectations and does not fail in an unacceptable manner. There are various types of
tests, and each test type addresses a specific testing requirement.

9.1 TYPES OF TESTS


Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program inputs produce valid outputs. All decision branches
and internal code flow should be validated. It is the testing of individual software units of the
application; it is done after the completion of an individual unit and before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at component level and test a specific business process, application,
and/or system configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined inputs and
expected results.

Integration testing
Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of
components is correct and consistent. Integration testing is specifically aimed at exposing
the problems that arise from the combination of components.

Functional testing
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user
manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or
special test cases. In addition, systematic coverage pertaining to identified business process
flows, data fields, predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are identified and the effective
value of current tests is determined.

System Test
System testing ensures that the entire integrated software system meets requirements. It tests
a configuration to ensure known and predictable results. An example of system testing is the
configuration-oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and integration points.

White Box Testing


White box testing is testing in which the software tester has knowledge of the
inner workings, structure, and language of the software, or at least its purpose. It
is used to test areas that cannot be reached from a black-box level.

Black Box Testing
Black box testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black box tests, like most other kinds of
tests, must be written from a definitive source document, such as a specification or
requirements document. It is testing in which the software under test is treated as a black box:
you cannot “see” into it. The test provides inputs and responds to outputs without considering
how the software works.

Test strategy and approach


Field testing will be performed manually and functional tests will be written in detail.

Test objectives

 All field entries must work properly.


 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
Features to be tested

1. Verify that the entries are of the correct format


2. No duplicate entries should be allowed
3. All links should take the user to the correct page.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

CHAPTER 10

SAMPLE CODE
AES.java

import java.security.Key;

import javax.crypto.Cipher;

import javax.crypto.spec.SecretKeySpec;

import org.bouncycastle.util.encoders.Base64;

public class AES {

private static final String ALGO = "AES";

public static String encrypt(String Data, String keyWord) throws Exception {

System.out.println("data at Encrypt : " + Data);

keyWord = keyWord.substring(0, 16);

byte[] keyValue = keyWord.getBytes();

System.out.println("Size : " + keyValue.length);

Key key = new SecretKeySpec(keyValue, ALGO);

Cipher c = Cipher.getInstance(ALGO);

c.init(Cipher.ENCRYPT_MODE, key);

// actually encrypt the data with the initialized cipher, then Base64-encode the ciphertext

byte[] encVal = c.doFinal(Data.getBytes());

String encryptedValue = new String(Base64.encode(encVal));

System.out.println("Encrypted value : " + encryptedValue);

return encryptedValue;

}

public static String decrypt(String encryptedData, String keyWord)

throws Exception {

keyWord = keyWord.substring(0, 16);

byte[] keyValue = keyWord.getBytes();

Key key = new SecretKeySpec(keyValue, ALGO);

Cipher c = Cipher.getInstance(ALGO);

c.init(Cipher.DECRYPT_MODE, key);

// Base64-decode the ciphertext, then decrypt it with the initialized cipher

byte[] decValue = c.doFinal(Base64.decode(encryptedData.getBytes()));

String decryptedValue = new String(decValue);

return decryptedValue; }

public static void main(String[] args) {

String password = "mypassword";

String keyWord = "ef50a0ef2c3e3a5fdf803ae9752c8c66";

try {

String passwordEnc = AES.encrypt(password, keyWord);

String passwordDec = AES.decrypt(passwordEnc, keyWord);

System.out.println("Plain Text : " + password);

System.out.println("Encrypted Text : " + passwordEnc);

System.out.println("Decrypted Text : " + passwordDec);

catch (Exception e) {

System.out.println("Opps,Exception In AES_EncrypterNdecrypter=>main() :");

32
e.printStackTrace(); }}

Attacker.java

import java.awt.Color;

import java.awt.Font;

import java.awt.event.ActionEvent;

import java.awt.event.ActionListener;

import java.io.DataInputStream;

import java.io.DataOutputStream;

import java.net.InetAddress;

import java.net.ServerSocket;

import java.net.Socket;

import java.sql.Connection;

import java.sql.DriverManager;

import java.sql.ResultSet;

import java.sql.ResultSetMetaData;

import java.sql.Statement;

import java.util.Vector;

import javax.swing.*;

public class Attacker implements ActionListener {

JFrame f;

JPanel p;
JLabel l1,l2,l3;

JButton b1,b2;

ImageIcon ic;

JTextField tc;

public Font f1 = new Font("Times new Roman", Font.BOLD, 17);

public JTextArea tf = new JTextArea();

public JTextField fname = new JTextField();

public JScrollPane pane1 = new JScrollPane();

public Attacker() {

f=new JFrame("Attacker::Privacy Preserving Public Auditing for Shared


Cloud Data With Secure Group Management");

p=new JPanel();

p.setBackground(new Color(100, 220, 235));

f.setSize(500, 650);

f.setVisible(true);

p.setLayout(null);

b1=new JButton("View CloudFiles");

b1.setBounds(230, 360, 150, 30);

p.add(b1); // add the button to the panel so it is actually displayed

f.add(p);

b2=new JButton("Attack");

b2.setBounds(200, 490, 100, 30);

p.add(b2);

tc=new JTextField();

tc.setBounds(160, 250, 250, 200);

p.add(tc);

l1=new JLabel("Enter New Key");

l1.setBounds(30, 330, 150, 30);

l1.setFont(f1);

p.add(l1);

tf.setColumns(200);

tf.setRows(100);

tf.setName("tf");

pane1.setName("pane");

pane1.setViewportView(tf);

pane1.setBounds(450, 250, 300, 200);

b1.addActionListener(this);

b2.addActionListener(this);

int[] port = new int[] { 401, 1006,201};

for (int i = 0; i < 3; i++) {

Thread th = new Thread(new PortListener(port[i]));

th.start(); }}

public static void main(String[] args) {

new Attacker(); }

class PortListener implements Runnable {

DataOutputStream dos = null;

DataInputStream in = null;

ServerSocket server;

Socket connection;

int i;

String fileid;

Connection con;

Statement stmt;

int port;

public PortListener(int port) {

this.port = port; }

public void run() {

if(this.port==1006) {

}else

if(this.port==201)

{}

}}

@Override

public void actionPerformed(ActionEvent ae) {

if(ae.getSource()==b2){

try {

InetAddress ia = InetAddress.getLocalHost();

String ip2= ia.getHostAddress();

String file=JOptionPane.showInputDialog("Enter File name");

String name=JOptionPane.showInputDialog("Enter Your name");

String pro=JOptionPane.showInputDialog("Enter GM server IP Address");

Socket s=new Socket(pro,2007);

DataOutputStream dos=new DataOutputStream(s.getOutputStream());

dos.writeUTF(tc.getText());

dos.writeUTF(file);

dos.writeUTF(ip2);

dos.writeUTF(name);

DataInputStream diss=new DataInputStream(s.getInputStream());

String msg=diss.readUTF();

System.out.println(""+ msg);

if(msg.equals("Attcker")){

37
JOptionPane.showMessageDialog(null,"Server Audited and will not
allow....You are an Attacker!!!!"); }

if(msg.equals("found")) {

JOptionPane.showMessageDialog(null,"You are Currently Revoked by


cloud server"); }}

catch (Exception e) {

// TODO: handle exception

e.printStackTrace(); } } } }
CHAPTER 11
SCREEN SHOTS

CHAPTER 12
CONCLUSION

Cloud storage auditing is an extremely important technique for resolving the problem of
ensuring the integrity of stored data in cloud storage. Because this need is widely shared,
many schemes providing different functions and security levels have been proposed.
In 2019, Tian et al. proposed a scheme that supports data privacy, identity traceability, and
group dynamics and claimed that their scheme is secure against collusion attacks between the
CSPs and revoked users. In this paper, we showed that, in their scheme, a tag can be forged
from a valid message and tag pair without knowing any secret values. We also showed that a
proof can be forged by a collusion attack, even if some challenged messages have been
deleted. We then proposed a new scheme that is secure against the above attacks while
providing the same functionality as their approach. We also provided formal security proofs
and an analysis of the computation costs of both schemes.

CHAPTER 13
REFERENCES

 (Apr. 2021). Cloud Storage - Global Market Trajectory and Analytics. [Online].
Available: https://www.researchandmarkets.com/reports/5140992/cloud-storage-global-market-trajectory

 G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song,
``Provable data possession at untrusted stores,'' in Proc. 14th ACM Conf. Comput.
Commun. Secur. (CCS), 2007, pp. 598–609.

 A. Juels and B. S. Kaliski, ``PORs: Proofs of retrievability for large files,'' in Proc. 14th
ACM Conf. Comput. Commun. Secur. (CCS), Oct. 2007, pp. 584–597.

 H. Shacham and B. Waters, ``Compact proofs of retrievability,'' in Proc. Int. Conf.
Theory Appl. Cryptol. Inf. Secur. Berlin, Germany: Springer, 2008, pp. 90–107.

 C. Wang, Q. Wang, K. Ren, and W. Lou, ``Privacy-preserving public auditing for data
storage security in cloud computing,'' in Proc. IEEE INFOCOM, Mar. 2010, pp. 1–9.

 Z. Hao, S. Zhong, and N. Yu, ``A privacy-preserving remote data integrity checking
protocol with data dynamics and public verifiability,'' IEEE Trans. Knowl. Data Eng.,
vol. 23, no. 9, pp. 1432–1437, Sep. 2011.

 K. Yang and X. Jia, ``An efficient and secure dynamic auditing protocol for data
storage in cloud computing,'' IEEE Trans. Parallel Distrib. Syst., vol. 24, no. 9, pp.
1717–1726, Sep. 2013.

 C. Wang, S. S. M. Chow, Q. Wang, K. Ren, and W. Lou, ``Privacy-preserving public


auditing for secure cloud storage,'' IEEE Trans. Comput., vol. 62, no. 2, pp. 362–375,
Feb. 2013.

