Major Project
on
Privacy-Preserving Public Auditing for Shared Cloud Data
with Secure Group Management
submitted
in partial fulfillment of the requirements for
the award of the degree of
Bachelor of Technology
in
COMPUTER SCIENCE AND ENGINEERING
by
M. Vinay Kumar - 229P5A0512
K. Srinath - 229P5A0505
V. Shiva Kumar Goud - 229P5A0524
SREE DATTHA GROUP OF INSTITUTIONS
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
DECLARATION
We hereby declare that the project report titled “Privacy-Preserving Public Auditing
for Shared Cloud Data with Secure Group Management”, carried out under the guidance of
J. Prashanthi, Sree Dattha Group of Institutions, Ibrahimpatnam, and submitted in partial
fulfillment of the requirements for the award of B. Tech. in Computer Science and
Engineering, is a record of bonafide work carried out by us, and the results embodied in this
project have not been reproduced or copied from any source.
The results embodied in this project report have not been submitted to any other University or
Institute for the award of any Degree or Diploma.
SREE DATTHA GROUP OF INSTITUTIONS
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
CERTIFICATE
This is to certify that the project entitled “Privacy-Preserving Public Auditing for Shared
Cloud Data with Secure Group Management” is being submitted by M. Vinay Kumar
(229P5A0512), K. Srinath (229P5A0505), V. Shiva Kumar (229P5A0524) in partial
fulfillment of the requirements for the award of B. Tech in Computer Science and
Engineering to the Jawaharlal Nehru Technological University Hyderabad is a record of
bonafide work carried out by them under our guidance and supervision during the academic
year 2024-25.
The results embodied in this thesis have not been submitted to any other University or
Institute for the award of any degree or diploma.

External Examiner
ACKNOWLEDGEMENT
Apart from our efforts, the success of any project depends largely on the encouragement and
guidelines of many others. We take this opportunity to express our gratitude to the people
who have been instrumental in the successful completion of this project.
We would like to express our sincere gratitude to Chairman Sri. G. Panduranga Reddy and
Vice-Chairman Dr. GNV Vibhav Reddy for providing excellent infrastructure and a pleasant
atmosphere throughout this project. We are obliged to Dr. M. Senthil Kumar, Principal, for
his cooperation throughout this project.

We are also thankful to Dr. A. Yashwanth Reddy, Head of the Department and Professor,
Department of Computer Science and Engineering, for providing encouragement and support
for completing this project successfully.
We take this opportunity to express our profound gratitude and deep regard to our internal
guide, J. Prashanthi, Assistant Professor, for her exemplary guidance, monitoring, and
constant encouragement throughout the project work. The blessing, help, and guidance given
by her shall carry us a long way in the journey of life on which we are about to embark.
We also received guidance and support from all the members of Sree Dattha Group of
Institutions who contributed to the completion of the project. We are grateful for their
constant support and help.
Finally, we would like to take this opportunity to thank our families for their constant
encouragement, without which this project would not have been completed. We sincerely
acknowledge and thank all those who supported us directly and indirectly in the completion
of this project.
ABSTRACT
With cloud storage services, users can store their data in the cloud and efficiently access the
data at any time and any location. However, when data are stored in the cloud, there is a risk
of data loss because users lose direct control over their data. To solve this problem, many
cloud storage auditing techniques have been studied. In 2019, Tian et al. proposed a public
auditing scheme for shared data that supports data privacy, identity traceability, and group
dynamics. In this paper, we point out that their scheme is insecure against tag forgery and
proof forgery attacks, which means that, even if the cloud server has deleted some outsourced
data, it can still generate a valid proof that it has accurately stored the data. We then propose
a new scheme that provides the same functionalities and is secure against the above attacks.
Moreover, we compare the results with other schemes in terms of computation and
communication costs.
LIST OF FIGURES
LIST OF CONTENTS
1 Introduction
2 System Analysis
3 System Specification
4 Implementation
4.1 Modules
5 System Design
6 Literature Survey
7 Software Environment
7.4 ODBC
7.5 JDBC
7.6 Networking
8 System Study
9 System Test
10 Sample Code
11 Screen Shots
12 Conclusion
13 References
CHAPTER 1
INTRODUCTION
1.1 Introduction
Cloud storage provides users with significant storage capacity and advantages such as cost
reduction, scalability, and convenient access to the stored data. Therefore, cloud storage that
is managed and maintained by professional cloud service providers (CSPs) is widely used by
many enterprises and personal clients. Once the data are stored in cloud storage, the clients
lose direct control over the stored data. Despite this, the CSPs must ensure that the client data
are placed in cloud storage without any modification or substitution. The simplest way to
achieve this is by checking the integrity of the stored data after downloading them. However,
when the volume of the stored data is large, this approach is quite inefficient, and thus many
methods for verifying the integrity of data stored in the cloud without a full download have
been proposed.
These techniques are called cloud storage auditing and can be classified into private auditing
and public auditing according to the subject of the integrity verification. In private auditing,
verification is achieved by users who have ownership of the stored data. Public auditing is
conducted by a third-party auditor (TPA) on behalf of the users to reduce their burden, and
thus public auditing schemes are more widely employed for cloud storage auditing. Public
auditing schemes provide various properties depending on the environment, such as privacy
preservation, data dynamics, and shared data. Privacy-preserving auditing is used to conduct
an integrity verification while protecting data information from the TPA, and dynamic data
auditing is where legitimate users are free to add, delete, or change the stored data. Shared
data auditing means freely sharing data within a legitimate user group. In this case, a
legitimate user group should be defined, and user addition and revocation should be carefully
considered. Recently, schemes that satisfy identity traceability, a concept that can trace the
abnormal behavior of legitimate users in shared data auditing, have also been proposed.
Tian et al. proposed a scheme that supports privacy preservation, data dynamics, and identity
traceability in shared data auditing. For efficient user enrollment and revocation, the authors
adopted the lazy revocation technique. Moreover, to secure the design against collusion
attacks between a revoked user and the server, they apply a technique in which the group
manager manages the messages and tag blocks generated by the revoked user.
Because the lazy-revocation technique is applied to the scheme, even if a user is revoked, no
additional operation occurs until additional changes are made to the block.
In this paper, we show that Tian et al.'s scheme is insecure against two types of
attacks, a tag forgery and a proof forgery, and propose a new scheme that provides the same
functionality and is secure against the above attacks. In Tian et al.'s scheme, a tag forgery is
possible by exploiting the vulnerability that tags are created in a malleable way, and a proof
forgery is possible by exploiting the fact that a secret value is exposed to the server when
additional changes to a block occur after the user is revoked.
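The inefficiency of full-download verification mentioned above can be made concrete with a small sketch. This is our illustration only (class and variable names are invented, and real auditing schemes replace this re-hashing with proofs over sampled blocks):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Naive integrity check: the client keeps a digest of the file at upload
// time and, to audit, downloads the whole file and re-hashes it. This is
// the inefficient baseline that motivates download-free auditing.
public class FullDownloadCheck {

    static byte[] digest(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // true iff the downloaded copy matches the digest kept at upload time
    static boolean verify(byte[] storedDigest, byte[] downloaded) {
        return Arrays.equals(storedDigest, digest(downloaded));
    }

    public static void main(String[] args) {
        byte[] file = "outsourced block".getBytes(StandardCharsets.UTF_8);
        byte[] tag = digest(file);              // kept by the client
        System.out.println(verify(tag, file));  // intact copy -> true
        file[0] ^= 1;                           // simulate corruption at the server
        System.out.println(verify(tag, file));  // modified copy -> false
    }
}
```

The cost of this baseline is one full transfer of the file per audit, which is exactly what public auditing schemes avoid.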
CHAPTER 2
SYSTEM ANALYSIS
2.1 Existing System
Ateniese et al. first introduced a provable data possession scheme called PDP and provided
two provably secure PDP schemes using RSA-based homomorphic authenticators. This
supports public verification with lower communication and computation costs. At the same
time, Juels et al. first proposed the concept and a formal security model of proof of
retrievability (POR), together with a sentinel-based POR scheme with certain properties.
Later, Shacham et al. improved the POR scheme and proposed a new public auditing scheme
that was built from the BLS signature and is secure in the random oracle model. In recent
years, many studies have been conducted on cloud storage auditing, supporting various
functionalities such as data privacy preservation, data dynamics, and shared data.

Erway et al. first proposed a PDP scheme using a rank-based authenticated skip list to
support data dynamics. However, the scheme suffers from high computational and
communication costs, and to address this concern, Wang et al. proposed a new auditing
scheme employing the Merkle Hash Tree (MHT), which is much simpler.
Wang et al. proposed an efficient public auditing scheme called Knox for shared data. The
scheme supports hiding the identity of individual users based on a group signature, but does
not support user revocation. In Oruta, a ring signature is used to hide the identity of
individual users; however, the scheme also has the problem that all user keys and block tags
must be regenerated to provide user revocation.
Disadvantages
In the existing system, there is no data auditing technique for verifying the stored data.
The existing system does not use dynamic hash tables to maintain the data blocks.
We show that Tian et al.'s scheme is insecure against two types of attacks: tag and proof
forgeries. In tag forgery, we show that an attacker can create a valid tag for the modified
message without knowing any secret values. In the proof forgery, we show that an attacker
can create a valid proof for the given challenged message even if some files stored on the
cloud have been deleted.
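To illustrate why a valid proof should be impossible once the data are deleted, the following simplified sketch (our illustration, not the pairing-based construction of the schemes discussed here) uses a fresh random nonce per audit, so a proof must be recomputed from the real block each time and cannot be cached or replayed:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;

// Minimal challenge-response sketch: the verifier keeps a copy of the
// block, sends a fresh random nonce, and a valid proof is
// SHA-256(nonce || block). Because every audit uses a new nonce, a
// server that has deleted the block cannot precompute or replay a proof.
public class ChallengeResponse {

    static byte[] proof(byte[] nonce, byte[] block) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(nonce);
            md.update(block);
            return md.digest();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // One audit round: true iff the server's copy matches the verifier's
    static boolean audit(byte[] verifierCopy, byte[] serverCopy) {
        byte[] nonce = new byte[16];
        new SecureRandom().nextBytes(nonce);           // fresh challenge
        byte[] expected = proof(nonce, verifierCopy);  // verifier side
        byte[] claimed = proof(nonce, serverCopy);     // server side
        return Arrays.equals(expected, claimed);
    }
}
```

Public auditing schemes achieve the same effect without the verifier holding the data, by using homomorphic tags; this sketch only conveys the freshness idea behind the challenge.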
We design a new public auditing scheme that is secure against the above attacks and has the
same functionalities, such as privacy preservation, data dynamics, data
sharing, and identity traceability. We changed the tag generation method to eliminate the
malleable property and the data proof generation method to enhance the privacy preservation.
We also changed the lazy revocation process to protect the secret information from the CSP
and proposed an active revocation process that can be applied flexibly in various environments.
We formally prove the security of the proposed scheme. According to the theorems, the
attacker cannot generate a valid tag and proof without knowing the secret values or the
original messages, respectively. We also provide comparison results with other schemes in
terms of the computation and communication costs.
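As a rough illustration of removing malleability (this is our simplified stand-in, not the actual tag algorithm of the proposed scheme), a keyed MAC that binds both the block index and the block content yields tags that cannot be transformed or moved to another block without the secret key:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

// Simplified non-malleable tag sketch: an HMAC over the block index and
// content. Without the key, a tag cannot be forged, and a tag for block i
// cannot be transformed into a tag for a modified block or reused at
// another index j.
public class BlockTag {

    static String tag(byte[] key, int index, String block) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            mac.update(Integer.toString(index).getBytes(StandardCharsets.UTF_8));
            mac.update((byte) 0); // domain separator between index and content
            mac.update(block.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : mac.doFinal()) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The proposed scheme needs public verifiability, so it uses signature-style tags rather than a MAC; the point here is only that binding the index and content under a secret eliminates the malleable property.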
Advantages
In the proposed system, to manage the data blocks handled by revoked users, we use
an extended dynamic hash table (EDHT).
In the proposed system, the modification record table (MRT) is a two-dimensional data
structure in which the group manager records the operations performed on each block to
provide identity traceability.
CHAPTER 3
SYSTEM SPECIFICATION
CHAPTER 4
IMPLEMENTATION
4.1 Modules
Data Owner
Group Manager
Cloud Server
Data Consumer (End User)
Attacker
Data Owner
In this module, the data owner should register by providing user name, password, email and
group, after registering owner has to Login by using valid user name and password. The Data
owner browses and uploads their data to the cloud server. For the security purpose the data
provider encrypts the data file and then stores in the cloud server via Group Manager. The
Owner is also responsible for uploading metadata to the Third-Party Authenticator (TPA).
The Data owner can have capable of manipulating the encrypted data file.
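A minimal sketch of the encrypt-before-upload step, assuming AES-GCM as the file cipher (the report does not fix a specific algorithm, and all names here are illustrative):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Owner-side encryption sketch: AES/GCM gives confidentiality and
// integrity for the file body before it leaves the owner's machine.
public class OwnerEncrypt {

    static byte[] crypt(int mode, SecretKey key, byte[] iv, byte[] data) {
        try {
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(mode, key, new GCMParameterSpec(128, iv));
            return c.doFinal(data);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Encrypts a file body and decrypts it again, returning the plaintext;
    // the ciphertext is what would be uploaded via the Group Manager.
    static String roundTrip(String plain) {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey key = kg.generateKey();  // owner's file key
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);  // unique per file
            byte[] ct = crypt(Cipher.ENCRYPT_MODE, key, iv,
                    plain.getBytes(StandardCharsets.UTF_8));
            return new String(crypt(Cipher.DECRYPT_MODE, key, iv, ct),
                    StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

In the full system the key would be shared with the group (for example via the GM's proxy re-encryption), not kept only by one owner.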
Group Manager
The Group Manager (GM) is a group-based component that interconnects cloud repositories
in this system. The GM acts as an interface between client applications and the cloud.
Attribute-based access control and proxy re-encryption mechanisms are jointly applied for
authentication and authorization in the GM.
Cloud Server
The cloud server is responsible for data storage and file authorization for the end user. Each
data file is stored in the cloud server with its tags, such as owner, file name, secret key, MAC,
and private key; the cloud server can also view the registered owners and end users. The data
file is sent based on privileges: if the privilege is correct, the file name, end-user name, and
secret key are also checked. If all are correct, the file is sent to the corresponding user;
otherwise, the requester is captured as an attacker.
Data Consumer (End User)
The data consumer is the end user who requests file contents and receives the response from
the corresponding cloud server. If the file name, secret key, and access permission
(.java, .txt, .log) are correct, the end user receives the file response from the cloud; otherwise,
he is considered an attacker and is blocked in the corresponding cloud. To access files after
being blocked, the user must be unblocked by the cloud.
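The checks described above can be sketched as follows; the class, method, and field names are hypothetical and not taken from the actual implementation:

```java
import java.util.Set;

// Sketch of the end-user access check: the request is served only if the
// file name, secret key, and permitted extension all match; otherwise the
// requester is treated as an attacker and blocked.
public class AccessCheck {

    static final Set<String> ALLOWED_EXT = Set.of(".java", ".txt", ".log");

    static boolean allowed(String fileName, String key,
                           String expectedName, String expectedKey) {
        int dot = fileName.lastIndexOf('.');
        String ext = dot >= 0 ? fileName.substring(dot) : "";
        return fileName.equals(expectedName)   // requested file exists
                && key.equals(expectedKey)     // secret key matches
                && ALLOWED_EXT.contains(ext);  // extension is permitted
    }
}
```

A real server would also record failed attempts so that a blocked user stays blocked until explicitly unblocked.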
Attacker
In the threat model, an attacker is one who tampers with a cloud file by adding a fake key to
the file in the cloud. The attacker may be inside the cloud or outside it. Attackers from inside
the cloud are called internal attackers, and attackers from outside the cloud are called
external attackers.
CHAPTER 5
SYSTEM DESIGN
5.1 System Architecture
The goal is for UML to become a common language for creating models of object-oriented
computer software. In its current form, UML comprises two major components: a meta-model
and a notation. In the future, some form of method or process may also be added to, or
associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing,
constructing, and documenting the artifacts of a software system, as well as for business
modeling and other non-software systems.
5.2.1 Use Case Diagram
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented
as use cases), and any dependencies between those use cases.
5.2.2 Class Diagram
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type
of static structure diagram that describes the structure of a system by showing the system's
classes, their attributes, operations (or methods), and the relationships among the classes. It
explains which class contains information.
5.2.3 Sequence Diagram
A sequence diagram in UML is a kind of interaction diagram that shows how processes
operate with one another and in what order; it is a construct of a Message Sequence Chart.
Sequence diagrams are sometimes called event diagrams, event scenarios, and timing
diagrams.
5.2.4 Data Flow Diagram
A Data Flow Diagram (DFD) is a visual representation of how data moves through a system,
showing the inputs, processes, storage, and outputs. It helps in understanding the flow of
information and the transformation of data within a system.
Fig: 5.2.4 Data Flow Diagram
5.2.5 Flow Chart
CHAPTER 6
LITERATURE SURVEY
Cloud storage services have become widely adopted due to their cost efficiency, scalability,
and ubiquitous access. However, outsourcing data storage to cloud service providers (CSPs)
introduces data integrity concerns, as users lose direct control over their data. To address this,
public auditing schemes have been proposed that allow third-party auditors (TPAs) to verify
the integrity of data without downloading it.
Ateniese et al. introduced the Provable Data Possession (PDP) model using RSA-based
homomorphic authenticators for public verification with low communication costs. Juels and
Kaliski followed with Proofs of Retrievability (POR) using sentinel-based approaches.
Shacham and Waters advanced this with BLS signatures for compact proofs. These
foundational schemes, however, had limitations in handling dynamic data and preserving user
privacy.
Wang et al. proposed privacy-preserving auditing using random masking techniques, but
their solution imposed high communication overhead. Erway et al. presented dynamic PDPs
with authenticated skip lists, while Zhu et al. used Index Hash Tables (IHTs) for improved
performance. Later, Tian et al. developed a public auditing scheme supporting privacy,
identity traceability, and group dynamics using a dynamic hash table (DHT).
Despite its contributions, Tian et al.'s scheme was shown to be vulnerable to tag and proof
forgery attacks. The use of malleable tag generation and the exposure of revocation
parameters allowed attackers to forge valid proofs even after data deletion.
To address these issues, the authors of the current paper proposed a new public auditing
scheme that:
Introduces both lazy and active revocation techniques for flexible user revocation.
Uses extended dynamic hash tables (EDHT) and modification record tables
(MRT) for data integrity tracking.
The proposed scheme was formally proven secure under the Computational Diffie-Hellman
(CDH) assumption and demonstrated better resistance to collusion attacks. Additionally, it
was evaluated to have comparable computation and communication costs to prior
schemes, particularly Tian et al.'s, while addressing their key vulnerabilities.
CHAPTER 7
SOFTWARE ENVIRONMENT
7.1 Java Technology
Java technology is both a programming language and a platform.
Simple
Architecture neutral
Object oriented
Portable
Distributed
High performance
Interpreted
Multithreaded
Robust
Dynamic
Secure
With most programming languages, you either compile or interpret a program so that you can
run it on your computer. The Java programming language is unusual in that a program is both
compiled and interpreted. With the compiler, first you translate a program into an
intermediate language called Java byte codes — the platform-independent codes interpreted
by the interpreter on the Java platform. The interpreter parses and runs each Java byte code
instruction on the computer. Compilation happens just once; interpretation occurs each time
the program is executed. The following figure illustrates how this works.
You can think of Java byte codes as the machine code instructions for the Java Virtual
Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web
browser that can run applets, is an implementation of the Java VM. Java byte codes help
make “write once, run anywhere” possible. You can compile your program into byte codes on
any platform that has a Java compiler. The byte codes can then be run on any implementation
of the Java VM. That means that as long as a computer has a Java VM, the same program
written in the Java programming language can run on Windows 2000, a Solaris workstation,
or on an iMac.
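A minimal example of the compile-once, run-anywhere workflow (file and class names are our own, chosen for illustration):

```java
// Smallest illustration of "write once, run anywhere": compile this file
// once with `javac Hello.java`, and the resulting Hello.class bytecodes
// run unchanged on any platform that has a Java VM (`java Hello`).
public class Hello {

    static String greeting() {
        return "Hello from the Java VM";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```

The same Hello.class can be copied between Windows, Linux, Solaris, or macOS machines and run without recompilation.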
The Java API is a large collection of ready-made software components that provide many
useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped
into libraries of related classes and interfaces; these libraries are known as packages. The
next section, What Can Java Technology Do?, highlights what functionality some of the
packages in the Java API provide.
The following figure depicts a program that’s running on the Java platform. As the figure
shows, the Java API and the virtual machine insulate the program from the hardware.
Native code is code that, after compilation, runs on a specific hardware platform. As a
platform-independent environment, the Java platform can be a bit slower than native code.
However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can
bring performance close to that of native code without threatening portability.
However, the Java programming language is not just for writing cute, entertaining applets for
the Web. The general-purpose, high-level Java programming language is also a powerful
software platform. Using the generous API, you can write many types of programs.
An application is a standalone program that runs directly on the Java platform. A special kind
of application known as a server serves and supports clients on a network. Examples of
servers are Web servers, proxy servers, mail servers, and print servers. Another specialized
program is a servlet. A servlet can almost be thought of as an applet that runs on the server
side. Java Servlets are a popular choice for building interactive web applications, replacing
the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of
applications. Instead of working in browsers, though, servlets run within Java Web servers,
configuring or tailoring the server.
How does the API support all these kinds of programs? It does so with packages of software
components that provide a wide range of functionality. Every full implementation of the
Java platform gives you the following features:
The essentials: Objects, strings, threads, numbers, input and output, data
structures, system properties, date and time, and so on.
Applets: The set of conventions used by applets.
Networking: URLs, TCP (Transmission Control Protocol), UDP (User
Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
Internationalization: Help for writing programs that can be localized for
users worldwide. Programs can automatically adapt to specific locales and be
displayed in the appropriate language.
Security: Both low level and high level, including electronic signatures,
public and private key management, access control, and certificates.
Software components: Known as JavaBeansTM, can plug into existing
component architectures.
Object serialization: Allows lightweight persistence and communication via
Remote Method Invocation (RMI).
Java Database Connectivity (JDBCTM): Provides uniform access to a wide
range of relational databases.
The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration,
telephony, speech, animation, and more. The following figure depicts what is included in the
Java 2 SDK.
7.3 How will Java technology change life?
We can’t promise you fame, fortune, or even a job if you learn the Java programming
language. Still, it is likely to make your programs better, and it requires less effort than other
languages. We believe that Java technology will help you do the following:
7.4 ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming interface for
application developers and database systems providers. Before ODBC became a de facto
standard for Windows programs to interface with database systems, programmers had to use
proprietary languages for each database they wanted to connect to. Now, ODBC has made the
choice of the database system almost irrelevant from a coding perspective, which is as it
should be. Application developers have much more important things to worry about than the
syntax that is needed to port their program from one database to another when business needs
suddenly change.
Through the ODBC Administrator in Control Panel, you can specify the particular database
that is associated with a data source that an ODBC application program is written to use.
Think of an ODBC data source as a door with a name on it. Each door will lead you to a
particular database. For example, the data source named Sales Figures might be a SQL Server
database, whereas the Accounts Payable data source could refer to an Access database. The
physical database referred to by a data source can reside anywhere on the LAN.
The ODBC system files are not installed on your system by Windows 95. Rather, they are
installed when you setup a separate database application, such as SQL Server Client or Visual
Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called
ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-
alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this
program and each maintains a separate list of ODBC data sources.
The advantages of this scheme are so numerous that you are probably thinking there must be
some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to
the native database interface. ODBC has had many detractors make the charge that it is too
slow. Microsoft has always claimed that the critical factor in performance is the quality of the
driver software that is used. In our humble opinion, this is true. The availability of good
ODBC drivers has improved a great deal recently. And anyway, the criticism about
performance is somewhat analogous to those who said that compilers would never match the
speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the
opportunity to write cleaner programs, which means you finish sooner. Meanwhile,
computers get faster every year.
7.5 JDBC
In an effort to set an independent database standard API for Java; Sun Microsystems
developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access
mechanism that provides a consistent interface to a variety of RDBMSs. This consistent
interface is achieved through the use of “plug-in” database connectivity modules, or drivers.
If a database vendor wishes to have JDBC support, he or she must provide the driver for each
platform that the database and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you
discovered earlier in this chapter, ODBC has widespread support on a variety of platforms.
Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than
developing a completely new connectivity solution.
JDBC was announced in March of 1996. It was released for a 90-day public review that
ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released
soon after.
The remainder of this section will cover enough information about JDBC for you to know
what it is about and how to use it effectively. This is by no means a complete overview of
JDBC; that would fill an entire book.
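A small example of the JDBC API in use. The connection URL, credentials, and `files` table here are hypothetical; without a vendor driver on the classpath, the call reports a connection failure rather than query results:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// JDBC usage sketch: open a connection, run a parameterized query, and
// iterate the result set. The URL and schema are illustrative only.
public class JdbcSketch {

    static final String URL = "jdbc:mysql://localhost:3306/auditdb";

    static String listFiles(String owner) {
        try (Connection con = DriverManager.getConnection(URL, "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT file_name FROM files WHERE owner = ?")) {
            ps.setString(1, owner);
            StringBuilder out = new StringBuilder();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    out.append(rs.getString("file_name")).append('\n');
                }
            }
            return out.toString();
        } catch (SQLException e) {
            // With no driver or server available, this branch reports the error
            return "Connection failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(listFiles("owner1"));
    }
}
```

Swapping the URL (and the driver jar) is all it takes to point the same code at a different RDBMS, which is the portability JDBC is designed for.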
7.6 Networking
7.6.1 TCP/IP stack
TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless
protocol.
7.6.2 IP Datagram
The IP layer provides a connectionless and unreliable delivery system. It considers each
datagram independently of the others. Any association between datagrams must be supplied
by the higher layers. The IP layer supplies a checksum that includes its own header. The
header includes the source and destination addresses. The IP layer handles routing through an
Internet. It is also responsible for breaking up large datagrams into smaller ones for
transmission and reassembling them at the other end.
7.6.3 UDP
UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents
of the datagram and port numbers. These are used to give a client/server model - see later.
7.6.4 TCP
TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a
virtual circuit that two processes can use to communicate.
7.6.5 Internet addresses
In order to use a service, you must be able to find it. The Internet uses an address scheme so
that machines can be located. The address is a 32-bit integer which gives the IP address. This
encodes a network ID and further addressing. The network ID falls into various classes
according to the size of the network address.
Network address
Class A uses 8 bits for the network address, with 24 bits left over for other addressing. Class
B uses 16-bit network addressing. Class C uses 24-bit network addressing, and class D
addresses are reserved for multicast.
Subnet address
Internally, the UNIX network is divided into sub networks. Building 11 is currently on one
sub network and uses 10-bit addressing, allowing 1024 different hosts.
Host address
8 bits are finally used for host addresses within our subnet. This places a limit of 256
machines that can be on the subnet.
Total address
7.6.6 Port addresses
A service exists on a host, and is identified by its port. This is a 16-bit number. To send a
message to a server, you send it to the port for that service of the host that it is running on.
This is not location transparency! Certain of these ports are "well known".
7.6.7 Sockets
A socket is a data structure maintained by the system to handle network connections. A socket
is created using the socket call, which returns an integer that is like a file descriptor. In fact,
under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>

int sockfd = socket(family, type, protocol);

Here "family" will be AF_INET for IP communications, protocol will be zero, and type will
depend on whether TCP or UDP is used. Two processes wishing to communicate over a
network create a socket each. These are similar to two ends of a pipe - but the actual pipe
does not yet exist.
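The same client/server pattern looks like this in Java, the language used for this project; the class and method names are our own illustration:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Java counterpart of the BSD socket calls above: a ServerSocket plays
// the server end and a Socket the client end, over the loopback
// interface. TCP provides the reliable, connection-oriented "virtual
// circuit" described in this section.
public class EchoSketch {

    // Starts a one-shot echo server on an ephemeral port, connects to it,
    // sends one line, and returns whatever the server echoes back.
    static String echoOnce(String message) {
        try (ServerSocket server = new ServerSocket(0)) { // port 0 = any free port
            Thread echoServer = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(in.readLine()); // echo one line back
                } catch (IOException ignored) {
                }
            });
            echoServer.start();
            try (Socket client = new Socket(InetAddress.getLoopbackAddress(),
                    server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(message);          // request
                String reply = in.readLine();  // response
                echoServer.join();
                return reply;
            }
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Passing port 0 lets the operating system choose a free port, which keeps the sketch self-contained; a real service would listen on a fixed, well-known port.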
CHAPTER 8
SYSTEM STUDY
8.1 FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase and business proposal is put forth with
a very general plan for the project and some cost estimates. During system analysis the
feasibility study of the proposed system is to be carried out. This is to ensure that the
proposed system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements for the system is essential.
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
8.1.1 ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and
development of the system is limited, and the expenditures must be justified. The developed
system is well within the budget, and this was achieved because most of the technologies
used are freely available; only the customized products had to be purchased.
8.1.2 TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements
of the system. Any system developed must not place a high demand on the available
technical resources, as this would lead to high demands being placed on the client. The
developed system must have modest requirements, as only minimal or no changes are
required for implementing this system.
8.1.3 SOCIAL FEASIBILITY
This study checks the level of acceptance of the system by the user. This includes the process
of training the user to use the system efficiently. The user must not feel threatened by the
system, but must instead accept it as a necessity. The level of acceptance by the users solely
depends on the methods employed to educate users about the system and to make them
familiar with it. Their level of confidence must be raised so that they are able to offer
constructive criticism, which is welcomed, as they are the final users of the system.
CHAPTER 9
SYSTEM TEST
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality
of components, subassemblies, assemblies, and/or a finished product. It is the process of
exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test, and each test type addresses a specific testing requirement.
Integration testing
Integration tests are designed to test integrated software components to determine whether
they actually run as one program. Testing is event driven and is more concerned with the
basic outcome of screens or fields. Integration tests demonstrate that although the
components were individually satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration testing is specifically aimed
at exposing the problems that arise from the combination of components.
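The combination step described above can be sketched with a small example: two units, an illustrative tag generator and a tag verifier, each assumed to have passed its own unit tests, are exercised together so that faults in their interaction surface. All class and method names here are hypothetical and not taken from the project source.

```java
import java.security.MessageDigest;
import java.util.Arrays;

// Unit 1: produces an integrity tag for a data block (SHA-256 stands in for the scheme's tag)
class TagGenerator {
    static byte[] tag(byte[] block) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(block);
    }
}

// Unit 2: checks a block against a previously generated tag
class TagVerifier {
    static boolean verify(byte[] block, byte[] tag) throws Exception {
        return Arrays.equals(TagGenerator.tag(block), tag);
    }
}

public class IntegrationTest {
    public static void main(String[] args) throws Exception {
        byte[] block = "shared data block".getBytes();
        byte[] t = TagGenerator.tag(block);
        // The combined flow must accept a genuine tag and reject a corrupted block
        if (!TagVerifier.verify(block, t)) throw new AssertionError("genuine tag rejected");
        if (TagVerifier.verify("tampered".getBytes(), t)) throw new AssertionError("forged block accepted");
        System.out.println("integration test passed");
    }
}
```

Each unit may pass in isolation, yet the pair can still disagree (for instance, on encoding); running them together is what this test type checks.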
Functional testing
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user
manuals.
System Test
System testing ensures that the entire integrated software system meets requirements. It tests
a configuration to ensure known and predictable results. An example of system testing is the
configuration-oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and integration points.
Black Box Testing
Black Box Testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black box tests, like most other kinds of
tests, must be written from a definitive source document, such as a specification or
requirements document. It is testing in which the software under test is treated as a black
box: you cannot "see" into it. The test provides inputs and responds to outputs without
considering how the software works.
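As a concrete illustration, the black-box idea can be sketched as a round-trip check on an AES encrypt/decrypt pair, driving it only through inputs and outputs with no knowledge of the cipher internals. The class name, key value, and method names below are illustrative assumptions, not taken from the project source.

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class BlackBoxAesTest {
    static final String ALGO = "AES";
    static final byte[] KEY = "TheBestSecretKey".getBytes(); // 16-byte demo key (assumption)

    static byte[] encrypt(byte[] plain) throws Exception {
        Cipher c = Cipher.getInstance(ALGO);
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(KEY, ALGO));
        return c.doFinal(plain);
    }

    static byte[] decrypt(byte[] cipherText) throws Exception {
        Cipher c = Cipher.getInstance(ALGO);
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(KEY, ALGO));
        return c.doFinal(cipherText);
    }

    public static void main(String[] args) throws Exception {
        byte[] input = "shared cloud block".getBytes();
        byte[] ct = encrypt(input);
        // Black-box checks: only inputs and outputs are inspected, never the internals
        if (Arrays.equals(ct, input)) throw new AssertionError("cipher text must differ from plain text");
        if (!Arrays.equals(decrypt(ct), input)) throw new AssertionError("round trip must restore the input");
        System.out.println("black-box round-trip test passed");
    }
}
```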
Test Results
All the test cases mentioned above passed successfully. No defects were encountered.
CHAPTER 10
SAMPLE CODE
AES.java
import java.security.Key;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import org.bouncycastle.util.encoders.Base64;

public class AES {
    private static final String ALGO = "AES";
    // 16-byte demo key; the actual key value is elided in the original listing
    private static final Key key = new SecretKeySpec("TheBestSecretKey".getBytes(), ALGO);

    // Encrypts the plain text and returns a Base64-encoded cipher text
    public static String encrypt(String data) throws Exception {
        Cipher c = Cipher.getInstance(ALGO);
        c.init(Cipher.ENCRYPT_MODE, key);
        String encryptedValue = new String(Base64.encode(c.doFinal(data.getBytes())));
        return encryptedValue;
    }

    // Decrypts a Base64-encoded cipher text back to the plain text
    public static String decrypt(String encryptedData) throws Exception {
        Cipher c = Cipher.getInstance(ALGO);
        c.init(Cipher.DECRYPT_MODE, key);
        String decryptedValue = new String(c.doFinal(Base64.decode(encryptedData.getBytes())));
        return decryptedValue;
    }

    public static void main(String[] args) {
        try {
            System.out.println(decrypt(encrypt("sample block")));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Attacker.java
import java.awt.Color;
import java.awt.Font;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import java.util.Vector;
import javax.swing.*;

// Excerpt; portions elided in the original listing are marked with "..." comments.
public class Attacker implements ActionListener {

    JFrame f;
    JPanel p;
    JLabel l1, l2, l3;
    JButton b1, b2;
    ImageIcon ic;
    JTextField tc;

    public Attacker() {
        // Build the attacker window
        f = new JFrame();
        p = new JPanel();
        f.setSize(500, 650);
        f.setVisible(true);
        p.setLayout(null);
        f.add(p);

        b2 = new JButton("Attack");
        p.add(b2);

        tc = new JTextField();
        p.add(tc);

        l1.setFont(f1);       // f1: Font instance (declaration elided)
        p.add(l1);

        // Text area and scroll pane for status output
        tf.setColumns(200);   // tf: JTextArea (declaration elided)
        tf.setRows(100);
        tf.setName("tf");
        pane1.setName("pane"); // pane1: JScrollPane (declaration elided)
        pane1.setViewportView(tf);

        b1.addActionListener(this);
        b2.addActionListener(this);
        th.start();           // th: background server thread (declaration elided)
    }

    public static void main(String[] args) {
        new Attacker();
    }

    // Fields of the elided server-listener class
    DataInputStream in = null;
    ServerSocket server;
    Socket connection;
    int i;
    String fileid;
    Connection con;
    Statement stmt;
    int port;

    // From the elided listener constructor and run() method:
    // this.port = port;
    // if (this.port == 1006) { ... } else if (this.port == 201) { ... }

    @Override
    public void actionPerformed(ActionEvent ae) {
        if (ae.getSource() == b2) {
            try {
                InetAddress ia = InetAddress.getLocalHost();
                // ... socket and stream setup elided in the original listing
                dos.writeUTF(tc.getText());
                dos.writeUTF(file);
                dos.writeUTF(ip2);
                dos.writeUTF(name);
                String msg = diss.readUTF();
                System.out.println("" + msg);
                if (msg.equals("Attcker")) {
                    JOptionPane.showMessageDialog(null,
                        "Server Audited and will not allow....You are an Attacker!!!!");
                }
                if (msg.equals("found")) {
                    // ... handling elided in the original listing
                }
            } catch (Exception e) {
                // ... error handling elided
            }
        }
    }
}
CHAPTER 11
SCREEN SHOTS
CHAPTER 12
CONCLUSION
Cloud storage auditing is an extremely important technique for resolving the problem of
ensuring the integrity of data stored in the cloud. Because the need to audit shared data is so
widespread, many schemes providing different functions and security levels have been
proposed. In 2019, Tian et al. proposed a scheme that supports data privacy, identity
traceability, and group dynamics, and claimed that their scheme is secure against collusion
attacks between the CSPs and revoked users. In this paper, we showed that, in their scheme, a
tag can be forged from a valid message and tag pair without knowing any secret values. We
also showed that a proof can be forged by a collusion attack, even if some challenged
messages have been deleted. We then proposed a new scheme that is secure against the above
attacks while providing the same functionality as their approach. We also provided formal
security proofs and an analysis of the computation costs of both schemes.
CHAPTER 13
REFERENCES
A. Juels and B. S. Kaliski, "PORs: Proofs of retrievability for large files," in Proc. 14th
ACM Conf. Comput. Commun. Secur. (CCS), Oct. 2007, pp. 584-597.
C. Wang, Q. Wang, K. Ren, and W. Lou, "Privacy-preserving public auditing for data
storage security in cloud computing," in Proc. IEEE INFOCOM, Mar. 2010, pp. 1-9.
Z. Hao, S. Zhong, and N. Yu, "A privacy-preserving remote data integrity checking
protocol with data dynamics and public verifiability," IEEE Trans. Knowl. Data Eng.,
vol. 23, no. 9, pp. 1432-1437, Sep. 2011.
K. Yang and X. Jia, "An efficient and secure dynamic auditing protocol for data
storage in cloud computing," IEEE Trans. Parallel Distrib. Syst., vol. 24, no. 9, pp.
1717-1726, Sep. 2013.