SPC Labmanual Final

The document outlines a series of exercises for simulating various cloud computing scenarios using CloudSim, including scheduling algorithms, resource management, log forensics, secure file sharing, data anonymization, and access control mechanisms. Each exercise includes aims, procedures, and source code examples for implementation. The results and outputs of the simulations depend on the specific algorithms and strategies employed, with performance metrics such as resource utilization and execution times being analyzed.

Uploaded by Yuva Priya
© All Rights Reserved

INDEX

1. Simulate a cloud scenario using CloudSim and run a scheduling algorithm not present in CloudSim
2. Simulate resource management using CloudSim
3. Simulate log forensics using CloudSim
4. Simulate secure file sharing using CloudSim
5. Implement data anonymization techniques over the simple dataset (masking, k-anonymization, etc.)
6. Implement any encryption algorithm to protect the images
7. Implement any image obfuscation mechanism
8. Implement a role-based access control mechanism in a specific scenario
9. Implement an attribute-based access control mechanism based on a particular scenario
10. Develop a log monitoring system with incident management in the cloud
EX NO:01 SIMULATE A CLOUD SCENARIO USING CLOUDSIM AND RUN A
DATE:    SCHEDULING ALGORITHM NOT PRESENT IN CLOUDSIM

AIM
To simulate a cloud scenario using CloudSim and run a scheduling algorithm that is
not present in CloudSim.
PROCEDURE :
1. Set up the development environment :
Install Java Development Kit (JDK).
Download the CloudSim library (version 3.0.3 or later) and include it in the project.
2. Import the required CloudSim packages :
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import java.util.*;
3. Create a new Java class for the simulation, e.g., "CustomSchedulingSimulation".
4. Implement the custom scheduling algorithm :
Define the criteria or objectives to optimize in the scheduling algorithm, such as
minimizing makespan, maximizing resource utilization, or improving response time.
Design and implement a scheduling algorithm that considers these objectives.
Consider factors such as task prioritization, resource availability, task
dependencies, load balancing, etc., depending on the objectives.
5. Create a datacenter :
1. Define the characteristics of the datacenter, such as the number of hosts, host
properties (MIPS, RAM, storage, bandwidth), and VM provisioning policies.
2. Use classes like DatacenterCharacteristics, Host, Vm, and VmAllocationPolicy in
CloudSim to create the datacenter.
6. Create a broker:
• Define the broker that will manage the cloudlets and interact with the datacenter.
• Use the DatacenterBroker class in CloudSim to create the broker.
7. Create and submit cloudlets :
• Define the cloudlets with their characteristics, such as length, utilization model and
data transfer size.
• Use the Cloudlet class in CloudSim to create the cloudlets.
• Submit the cloudlets to the broker using the submitCloudletList() method.
8. Set the custom scheduling algorithm :
• Create a class that extends the VmAllocationPolicy class in CloudSim.
• Override the allocateHostForVm() method to implement the custom scheduling
algorithm.
• Consider the objectives and criteria defined in step 4 to allocate VMs to suitable
hosts based on the scheduling policy.
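The allocation logic described in steps 4 and 8 can be sketched independently of CloudSim. The toy `allocate()` below is an illustration, not a CloudSim API: it greedily places each VM on the host with the most free MIPS, one simple way to pursue the load-balancing objective.

```python
def allocate(vms, hosts):
    """Greedy allocation: place each VM on the host with the most free MIPS.

    vms:   list of (vm_id, mips_demand)
    hosts: dict host_id -> free MIPS capacity (mutated as VMs are placed)
    Returns dict vm_id -> host_id; VMs that fit nowhere are skipped.
    """
    placement = {}
    for vm_id, demand in sorted(vms, key=lambda v: -v[1]):  # biggest VMs first
        host_id = max(hosts, key=hosts.get)  # host with most free capacity
        if hosts[host_id] >= demand:
            hosts[host_id] -= demand
            placement[vm_id] = host_id
    return placement

print(allocate([(0, 500), (1, 800), (2, 400)], {0: 1000, 1: 1000}))
```

A real CloudSim policy would express the same decision inside allocateHostForVm(), querying each Host's available MIPS instead of a plain dict.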
9. Start the simulation :
• Initialize the CloudSim simulation environment using CloudSim.init().
• Set the datacenter and VM allocation policy for the broker.
• Start the simulation using CloudSim.startSimulation().
10. Stop the simulation :
• Stop the simulation using CloudSim.stopSimulation().
11. Process the results and generate output :
• Retrieve the results from the broker, such as the list of finished cloudlets and their
execution details.
• Analyze and process the results based on the objectives and criteria of the custom
scheduling algorithm.
• Generate the desired output, such as performance metrics, execution times, resource
utilization, etc.
SOURCE CODE
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.*;
import java.util.*;
public class CustomSchedulingSimulation {
public static void main(String[] args) {
// Initialize the CloudSim simulation environment
int numUsers = 1;
Calendar calendar = Calendar.getInstance();
CloudSim.init(numUsers, calendar, false);
// Create a datacenter
Datacenter datacenter = createDatacenter("Datacenter_0");
// Create a broker
DatacenterBroker broker = createBroker();
// Create the custom VM allocation policy (CustomSchedulingPolicy is the class
// written in step 8; in CloudSim the allocation policy is passed to the
// Datacenter constructor, so createDatacenter() can be adapted to use it)
VmAllocationPolicy policy = new CustomSchedulingPolicy(datacenter.getHostList());
// Create and submit cloudlets to the broker
int numVMs = 5;
int numCloudlets = 10;
createVMsAndCloudlets(broker, numVMs, numCloudlets);
// Start the simulation
CloudSim.startSimulation();
// Process the results and generate output
List<Cloudlet> finishedCloudlets = broker.getCloudletReceivedList();
// Perform necessary calculations and analysis
// Stop the simulation
CloudSim.stopSimulation();
// Display the results
printResults(finishedCloudlets);
}
private static Datacenter createDatacenter(String name) {
List<Host> hostList = new ArrayList<>();
// Create hosts with required characteristics
// Define host properties like MIPS, RAM, storage, bandwidth, etc.
// Use Host and other related classes in CloudSim
for (int i = 0; i < 3; i++) {
int mips = 1000; // Example MIPS value
int ram = 2048; // Example RAM value
long storage = 1000000; // Example storage value
int bw = 10000; // Example bandwidth value
List<Pe> peList = new ArrayList<>();
peList.add(new Pe(0, new PeProvisionerSimple(mips))); // a host needs at least one PE
hostList.add(new Host(i, new RamProvisionerSimple(ram), new BwProvisionerSimple(bw),
storage, peList, new VmSchedulerSpaceShared(peList)));
}
// Create DatacenterCharacteristics and return a Datacenter object
String arch = "x86";
String os = "Linux";
String vmm = "Xen";
double time_zone = 10.0;
double cost = 3.0;
double costPerMem = 0.05;
double costPerStorage = 0.001;
double costPerBw = 0.0;
DatacenterCharacteristics characteristics = new DatacenterCharacteristics(arch, os, vmm,
hostList, time_zone, cost, costPerMem, costPerStorage, costPerBw);
Datacenter datacenter = null;
try {
datacenter = new Datacenter(name, characteristics, new VmAllocationPolicySimple(hostList),
new ArrayList<Storage>(), 0);
} catch (Exception e) {
e.printStackTrace();
}
return datacenter;
}
private static DatacenterBroker createBroker() {
DatacenterBroker broker = null;
try {
broker = new DatacenterBroker("Broker");
} catch (Exception e) {
e.printStackTrace();
}
return broker;
}
private static void createVMsAndCloudlets(DatacenterBroker broker, int numVMs,
int numCloudlets) {
List<Vm> vmList = new ArrayList<>();
List<Cloudlet> cloudletList = new ArrayList<>();
// Create VMs with required characteristics
// Define VM properties like MIPS, RAM, storage, bandwidth, etc.
// Use Vm and other related classes in CloudSim
for (int i = 0; i < numVMs; i++) {
int mips = 1000; // Example MIPS value
int ram = 512; // Example RAM value
long size = 10000; // Example storage value
int bw = 1000; // Example bandwidth value
int pesNumber = 1;
Vm vm = new Vm(i, broker.getId(), mips, pesNumber, ram, bw, size, "Xen",
new CloudletSchedulerTimeShared());
vmList.add(vm);
}
// Create cloudlets with required characteristics
// Define cloudlet length, utilization model, etc.
// Use Cloudlet and other related classes in CloudSim
for (int i = 0; i < numCloudlets; i++) {
long length = 10000; // Example cloudlet length
int pesNumber = 1;
long fileSize = 300;
long outputSize = 300;
UtilizationModel utilizationModel = new UtilizationModelFull();
Cloudlet cloudlet = new Cloudlet(i, length, pesNumber, fileSize, outputSize,
utilizationModel, utilizationModel, utilizationModel);
cloudlet.setUserId(broker.getId());
cloudletList.add(cloudlet);
// Assign VMs to cloudlets
broker.bindCloudletToVm(cloudlet.getCloudletId(), vmList.get(i % numVMs).getId());
}
broker.submitVmList(vmList);
broker.submitCloudletList(cloudletList);
}
private static void printResults(List<Cloudlet> cloudlets) {
// Process and print the simulation results
// Display performance metrics like makespan, resource utilization, response time, etc.
for (Cloudlet cloudlet : cloudlets) {
System.out.println("Cloudlet ID: " + cloudlet.getCloudletId() + ", VM ID: " + cloudlet.getVmId()
+ ", Status: " + cloudlet.getStatus() + ", Start Time: " + cloudlet.getExecStartTime()
+ ", Finish Time: " + cloudlet.getFinishTime());
}
}
}

OUTPUT
--- Cloud Scenario Simulation ---

RESULT AND OUTPUT :


The result and output of the simulation will depend on the specific scheduling algorithm
implemented and the characteristics of the simulated cloud scenario. You can analyze performance
metrics such as makespan, resource utilization, response time, and any other metrics relevant to the
custom scheduling algorithm. The specific output and result analysis will vary based on the
implementation and the evaluation criteria chosen for the scheduling algorithm. You can print the
output within the code using System.out.println() statements or save the results to a file for further
analysis.
EX NO:02 SIMULATE RESOURCE MANAGEMENT USING CLOUD
DATE: SIM

AIM:
The aim is to simulate resource management using CloudSim, which involves managing the
allocation and utilization of resources in a cloud environment. The objective is to optimize resource
allocation, maximize resource utilization and improve overall system performance.
PROCEDURE :
1. Set up the development environment :
Install Java Development Kit (JDK).
Download the CloudSim library (version 3.0.3 or later) and include it in the project.
2. Import the required CloudSim packages:
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import java.util.*;
3. Create a new Java class for the simulation, e.g., "ResourceManagementSimulation".
4. Initialize CloudSim :
Initialize the CloudSim simulation environment with the number of users and the
simulation calendar.
Set the simulation parameters, such as the simulation duration and whether to trace the
simulation progress.
int numUsers = 1;
Calendar calendar = Calendar.getInstance();
CloudSim.init(numUsers, calendar, false);
5. Create a datacenter :
o Define the characteristics of the datacenter, such as the number of hosts, host
properties (MIPS, RAM, storage, bandwidth), and VM provisioning policies.
o Use classes like DatacenterCharacteristics, Host, Vm, and VmAllocationPolicy in
CloudSim to create the datacenter.
Datacenter datacenter = createDatacenter("Datacenter");
6. Create a broker :
• Define the broker that will manage the allocation and utilization of resources.
• Use the DatacenterBroker class in CloudSim to create the broker.

DatacenterBroker broker = createBroker();
7. Create VMs and cloudlets :
Define the virtual machines (VMs) with their characteristics, such as MIPS, RAM, storage,
and bandwidth.
Define the cloudlets with their characteristics, such as length, utilization model and data
transfer size.
List<Vm> vmList = createVMs(numVMs);
List<Cloudlet> cloudletList = createCloudlets(numCloudlets);
8. Submit VMs and cloudlets to the broker :
• Use the submitVmList() method to submit the list of VMs to the broker.
• Use the submitCloudletList() method to submit the list of cloudlets to the broker.
broker.submitVmList(vmList);
broker.submitCloudletList(cloudletList);
9. Run the simulation :
Start the simulation using CloudSim.startSimulation().
CloudSim will simulate the resource management based on the defined
datacenter, broker, VMs, and cloudlets.
CloudSim.startSimulation();
10. Stop the simulation :
Stop the simulation using CloudSim.stopSimulation().
This will halt the simulation and collect the results.
CloudSim.stopSimulation();
11. Process the results and generate output :
Retrieve the results from the broker, such as the list of finished cloudlets and their
execution details.
Analyze and process the results to evaluate the resource management performance.
Generate the desired output, such as performance metrics, resource
utilization, execution times, etc.
List<Cloudlet> finishedCloudlets = broker.getCloudletReceivedList();
printResults(finishedCloudlets);
SOURCE CODE
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim; .
import java.util.*;
public class ResourceManagementSimulation {
public static void main(String[] args) {
int numUsers = 1;
Calendar calendar = Calendar.getInstance();
CloudSim.init(numUsers, calendar, false);
Datacenter datacenter = createDatacenter("Datacenter");
DatacenterBroker broker = createBroker();
int numVMs = 10;
int numCloudlets = 20;
List<Vm> vmList = createVMs(numVMs);
List<Cloudlet> cloudletList = createCloudlets(numCloudlets);
broker.submitVmList(vmList);
broker.submitCloudletList(cloudletList);
CloudSim.startSimulation();
CloudSim.stopSimulation();
List<Cloudlet> finishedCloudlets = broker.getCloudletReceivedList();
printResults(finishedCloudlets);
}
}
private static Datacenter createDatacenter(String name) {
// Create and configure the datacenter
// Use classes like DatacenterCharacteristics, Host, VmAllocationPolicy, etc.
// Return the created Datacenter object
}
private static DatacenterBroker createBroker() {
// Create and configure the broker
// Use the DatacenterBroker class
// Return the created DatacenterBroker object
}
private static List<Vm> createVMs(int numVMs) {
// Create and configure the virtual machines (VMs)
// Set VM properties like MIPS, RAM, storage, and bandwidth
// Return the list of created VMs
}
private static List<Cloudlet> createCloudlets(int numCloudlets) {
// Create and configure the cloudlets
// Set cloudlet properties like length, utilization model, and data transfer size
// Return the list of created cloudlets
}
private static void printResults(List<Cloudlet> cloudlets) {
// Process and print the results
// Analyze the finished cloudlets and generate desired output
}
}

OUTPUT

RESULT :
The result and output of the simulation will depend on the specific resource management
strategies implemented and the characteristics of the simulated cloud scenario. You can analyze various
performance metrics such as makespan, resource utilization, response time, throughput, etc. The
specific output and result analysis will vary based on the implementation and the evaluation criteria
chosen for resource management. You can print the output within the code using System.out.println()
statements or save the results to a file for further analysis.
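The metrics named above reduce to simple arithmetic over the cloudlets' start and finish times. A small sketch with made-up numbers (not CloudSim output) shows the calculation:

```python
# (start_time, finish_time) pairs for finished cloudlets, in seconds
cloudlets = [(0.1, 4.1), (0.1, 6.1), (2.0, 5.0)]

# Makespan: span from the earliest start to the latest finish
makespan = max(f for _, f in cloudlets) - min(s for s, _ in cloudlets)
# Average response time: mean of the per-cloudlet durations
avg_response = sum(f - s for s, f in cloudlets) / len(cloudlets)

print(f"Makespan: {makespan:.1f} s")
print(f"Avg response time: {avg_response:.2f} s")
```

In the Java simulation the same pairs come from getExecStartTime() and getFinishTime() on each finished Cloudlet.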

EX NO:03
SIMULATE LOG FORENSICS USING CLOUD SIM
DATE:

AIM :
The aim is to simulate log forensics using CloudSim, which involves generating and analyzing
log data in a cloud environment. The objective is to detect suspicious activities and anomalies in the
logs and to generate alerts, reports, or visualizations based on the findings.
PROCEDURE :
1. Set up the development environment :
Install Java Development Kit (JDK).
Download the CloudSim library (version 3.0.3 or later) and include it in the project.
2. Import the required CloudSim packages :
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import java.util.*;
3. Create a new Java class for the simulation, e.g., "LogForensicsSimulation".
4. Initialize CloudSim :
Initialize the CloudSim simulation environment with the number of users and the
simulation calendar.
Set the simulation parameters, such as the simulation duration and whether to trace the
simulation progress.
int numUsers = 1;
Calendar calendar = Calendar.getInstance();
CloudSim.init(numUsers, calendar, false);
5.Create a datacenter :
Define the characteristics of the datacenter, such as the number of hosts, host
properties (MIPS, RAM, storage, bandwidth), and VM provisioning policies.
Use classes like DatacenterCharacteristics, Host, Vm, and VmAllocationPolicy in
CloudSim to create the datacenter.
Datacenter datacenter = createDatacenter("Datacenter");
6.Create a broker :
Define the broker that will manage the allocation and utilization of resources.
Use the DatacenterBroker class in CloudSim to create the broker.
DatacenterBroker broker = createBroker();
7. Create VMs and cloudlets :
• Define the Virtual Machines (VMs) with their characteristics, such as MIPS, RAM,storage
and bandwidth.
• Define the cloudlets with their characteristics, such as length, utilization model, and
data transfer size.
List<Vm> vmList = createVMs(numVMs); List <Cloudlet> cloudletList
= createCloudlets(numCloudlets);
8.Submit VMs and cloudlets to the broker :
• Use the submitVmList() method to submit the list of VMs to the broker.
• Use the submitCloudletList() method to submit the list of cloudlets to the broker.
broker.submitVmList(vmList); broker.submitCloudletList(cloudletList);
9. Run the simulation :
• Start the simulation using CloudSim.startSimulation().
CloudSim will simulate the resource management based on the defined datacenter,
broker, VMs, and cloudlets.
CloudSim.startSimulation();
10. Stop the simulation :
Stop the simulation using CloudSim.stopSimulation().
This will halt the simulation and collect the results.
CloudSim.stopSimulation();
11. Process the results and generate output :
• Retrieve the results from the broker, such as the list of finished cloudlets and their
execution details.
• Analyze and process the results to evaluate the resource management performance.
• Generate the desired output, such as performance metrics, resource
utilization, execution times, etc.
List<Cloudlet> finishedCloudlets = broker.getCloudletReceivedList();
printResults(finishedCloudlets);
SOURCE CODE :
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import java.util.*;
public class LogForensicsSimulation {
public static void main(String[] args) {
int numUsers = 1;
Calendar calendar = Calendar.getInstance();
CloudSim.init(numUsers, calendar, false);
List<LogEntry> logData = generateLogData();
List<LogEntry> suspiciousActivities = detectSuspiciousActivities(logData);
List<LogEntry> anomalies = detectAnomalies(logData);
printSuspiciousActivities(suspiciousActivities);
printAnomalies(anomalies);
}
private static List<LogEntry> generateLogData() {
// Generate or retrieve log data for the simulation
// Simulate log entries with various attributes like timestamp, source IP,
// destination IP, log message, etc.
// Return the generated log data as a list of LogEntry objects
}
private static List<LogEntry> detectSuspiciousActivities(List<LogEntry> logData) {
// Implement log analysis algorithms to detect suspicious activities
// Use pattern matching, machine learning, statistical analysis, etc.
// Return the list of detected suspicious activities as LogEntry objects
}
private static List<LogEntry> detectAnomalies(List<LogEntry> logData) {
// Implement log analysis algorithms to detect anomalies
// Use pattern matching, machine learning, statistical analysis, etc.
// Return the list of detected anomalies as LogEntry objects
}
private static void printSuspiciousActivities(List<LogEntry> suspiciousActivities) {
// Print or process the list of detected suspicious activities
// Generate alerts, reports, or visualizations based on the detected activities
}
private static void printAnomalies(List<LogEntry> anomalies) {
// Print or process the list of detected anomalies
// Generate alerts, reports, or visualizations based on the detected anomalies
}
}

OUTPUT

RESULT:
The result and output of the simulation will depend on the log data generated and the log
analysis algorithms implemented. You can analyze the log data to detect suspicious activities and
anomalies, and generate output such as alerts, reports, or visualizations based on the findings. The
specific output and result analysis will vary based on the implementation and the log analysis
techniques used. You can print the output within the code using System.out.println() statements or
save the results to a file or database for further analysis and reporting.
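The detection idea behind detectSuspiciousActivities() can be illustrated outside CloudSim. The sketch below uses hypothetical log entries and a made-up `threshold` (both illustrative, not from the manual): it flags sources with repeated failed logins by simple pattern matching and counting.

```python
from collections import Counter

# Hypothetical log entries; a real run would use the generated LogEntry data
logs = [
    {'src': '10.0.0.5', 'msg': 'LOGIN FAILED'},
    {'src': '10.0.0.5', 'msg': 'LOGIN FAILED'},
    {'src': '10.0.0.5', 'msg': 'LOGIN FAILED'},
    {'src': '10.0.0.9', 'msg': 'LOGIN OK'},
]

def detect_suspicious(entries, threshold=3):
    # Count failed-login messages per source IP and flag heavy offenders
    failures = Counter(e['src'] for e in entries if 'FAILED' in e['msg'])
    return [src for src, n in failures.items() if n >= threshold]

print(detect_suspicious(logs))  # ['10.0.0.5']
```

Anomaly detection (detectAnomalies) would follow the same shape but compare counts against a statistical baseline rather than a fixed threshold.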

EX NO:04 SIMULATE A SECURE FILE SHARING USING A CLOUD
DATE: SIM

AIM:
The aim is to simulate a secure file sharing system using CloudSim. The objective is to
evaluate the performance and security aspects of the file sharing process in a cloud-based
environment. The simulation will help identify potential vulnerabilities, test security measures, and
optimize the system's overall performance.
PROCEDURE :
1. Set up the development environment :
• Install Java Development Kit (JDK).
• Download the CloudSim library (version 3.0.3 or later) and include it in the project.
2. Import the required CloudSim packages :
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import java.util.*;
3. Create a new Java class for the simulation, e.g., "SecureFileSharingSimulation".
4. Initialize CloudSim :
Initialize the CloudSim simulation environment with the number of users and the
simulation calendar.
Set the simulation parameters, such as the simulation duration and whether to trace
the simulation progress.
int numUsers = 1;
Calendar calendar = Calendar.getInstance();
CloudSim.init(numUsers, calendar, false);
5. Create cloud entities :
Create a datacenter that represents the cloud infrastructure where file sharing will
take place.
Create a set of users who will upload and download files.
Datacenter datacenter = createDatacenter();
List<User> users = createUsers();
6. Implement secure file sharing mechanisms :
Design and implement security measures such as authentication, access control,
encryption, etc.
Define the file sharing protocols and mechanisms.
Implement secure communication channels between users and the cloud.
void uploadFile(User user, String filename, byte[] fileData) {
// Implement the secure file upload mechanism
// Perform necessary security checks, encryption, and store the file in the cloud storage
}
byte[] downloadFile(User user, String filename) {
// Implement the secure file download mechanism
// Perform necessary security checks, decryption, and retrieve the file from the cloud storage
// Return the downloaded file data as a byte array
}
7. Simulate file sharing activities :
• Simulate user activities such as uploading files, downloading files, and measuring
performance metrics.
• Generate file requests and simulate the file sharing process in the cloud environment.
• Measure performance metrics like response time, throughput, and security-related metrics.
List<FileRequest> fileRequests = generateFileRequests();
for (FileRequest request : fileRequests) {
User user = selectUser(users);
byte[] fileData = generateFileData(request.getSize());
uploadFile(user, request.getFileName(), fileData);
byte[] downloadedData = downloadFile(user, request.getFileName());
// Perform validation or analysis of the downloaded file
// Measure and record performance metrics
}
8. Process the results and generate output :
• Analyze the simulation results, including performance metrics and security-related findings.
• Generate reports, charts, or visualizations based on the simulation output.
• Evaluate the system's performance, security, and potential improvements.
generateSimulationReport();
generatePerformanceMetrics();

SOURCE CODE:
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import java.util.*;
public class SecureFileSharingSimulation {
public static void main(String[] args) {
int numUsers = 1;
Calendar calendar = Calendar.getInstance();
CloudSim.init(numUsers, calendar, false);
Datacenter datacenter = createDatacenter();
List<User> users = createUsers();
List<FileRequest> fileRequests = generateFileRequests();
for (FileRequest request : fileRequests) {
User user = selectUser(users);
byte[] fileData = generateFileData(request.getSize());
uploadFile(user, request.getFileName(), fileData);
byte[] downloadedData = downloadFile(user, request.getFileName());
// Perform validation or analysis of the downloaded file
// Measure and record performance metrics
}
generateSimulationReport();
generatePerformanceMetrics();
}
private static Datacenter createDatacenter() {
// Implement the creation and configuration of a datacenter in CloudSim
// Set up the datacenter's resources, such as hosts, VMs, and storage
// Configure networking, security, and other properties as needed
return null;
}
private static List<User> createUsers() {
// Implement the creation and configuration of users in CloudSim
// Set up user properties, such as credentials, access privileges, etc.
// Create user entities and associate them with the datacenter
return null;
}
private static User selectUser(List<User> users) {
// Implement user selection logic for file sharing activities
// Choose a user from the list of available users based on a specific algorithm or criteria
return null;
}
private static List<FileRequest> generateFileRequests() {
// Implement the generation of file requests for simulation
// Generate a list of file requests with properties like file name, size, etc.
return null;
}
private static byte[] generateFileData(int fileSize) {
// Generate random file data of the specified size for simulation
return null;
}
private static void uploadFile(User user, String filename, byte[] fileData) {
// Implement the secure file upload mechanism
// Perform necessary security checks, encryption, and store the file in the cloud storage
}
private static byte[] downloadFile(User user, String filename) {
// Implement the secure file download mechanism
// Perform necessary security checks, decryption, and retrieve the file from the cloud storage
// Return the downloaded file data as a byte array
return null;
}
private static void generateSimulationReport() {
// Generate a report based on the simulation results
// Include information on the file sharing activities, security aspects, and performance metrics
}
private static void generatePerformanceMetrics() {
// Generate performance metrics based on the simulation results
// Calculate metrics like response time, throughput, security-related metrics, etc.
}
}

OUTPUT

RESULT
The specific result and output of the simulation will depend on the implementation of the file
sharing mechanisms, security measures and performance metrics. The output may include
information such as :
• Simulation progress and duration
• File upload and download activities
• Performance metrics (e.g., response time, throughput)
• Security-related metrics (e.g., authentication success rate, data encryption level)
• Simulation reports, charts, or visualizations
You can customize the output based on the specific requirements and the metrics chosen to
measure. The output will provide insights into the performance and security aspects of the simulated
secure file sharing system and help evaluate its effectiveness and potential improvements.
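The upload/download flow with access control and an integrity check can be modelled in a few lines of plain Python. This is a toy sketch: the in-memory `cloud_storage` dict and `acl` stand in for real cloud storage and authentication, and real code would also encrypt `data` before storing it.

```python
import hashlib

cloud_storage = {}   # stands in for the datacenter's storage
acl = {}             # user -> set of files they may access

def upload_file(user, filename, data):
    # Store the file together with an integrity checksum
    cloud_storage[filename] = (data, hashlib.sha256(data).hexdigest())
    acl.setdefault(user, set()).add(filename)

def download_file(user, filename):
    # Access control: only users granted the file may read it
    if filename not in acl.get(user, set()):
        raise PermissionError(f"{user} may not read {filename}")
    data, digest = cloud_storage[filename]
    # Integrity check before returning the data
    assert hashlib.sha256(data).hexdigest() == digest
    return data

upload_file('bob', 'notes.txt', b'hello')
print(download_file('bob', 'notes.txt'))  # b'hello'
```

The CloudSim version would put the same checks inside uploadFile()/downloadFile() and record the elapsed time of each call as a performance metric.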

EX NO:05 IMPLEMENT DATA ANONYMIZATION TECHNIQUES OVER
DATE: THE SIMPLE DATASET (masking, k-anonymization, etc.)

AIM :
The aim of masking is to replace sensitive data with a non-sensitive placeholder value while
preserving the structure and format of the original data.
PROCEDURE :
1. Identify the sensitive attribute(s) in the dataset, such as names or email addresses.
2. Replace the sensitive values with a masking value (e.g., "X" or "*********").
3. Ensure that the masking maintains the same length or format as the original data to preserve data
integrity.
4. Generate a new anonymized dataset with masked values.
SOURCE CODE:
import pandas as pd
# Original dataset
data = pd.DataFrame({
    'Name': ['John Doe', 'Jane Smith', 'Michael Johnson'],
    'Email': ['[email protected]', '[email protected]', '[email protected]'],
    'Age': [25, 30, 35]
})
# Masking sensitive attributes
data['Name'] = 'XXXXXXXXXX'
data['Email'] = 'xxxxxxxxxx'
# Output anonymized dataset
print(data)
OUTPUT

RESULT:
The sensitive attributes, Name and Email, have been replaced with masking values,
ensuring the original structure and format of the dataset are maintained.
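Step 3 of the procedure asks the mask to keep the original length and format, which the blanket replacement above does not show. A small pure-Python helper (`mask_value` is illustrative, not from the manual) demonstrates format-preserving masking:

```python
def mask_value(value, keep_last=0, mask_char='X'):
    """Mask alphanumeric characters, preserving length, separators, and an optional tail."""
    n = len(value)
    out = []
    for i, ch in enumerate(value):
        if i >= n - keep_last or not ch.isalnum():
            out.append(ch)        # keep separators (spaces, '-', '@') and the kept tail
        else:
            out.append(mask_char)
    return ''.join(out)

print(mask_value('John Doe'))                 # XXXX XXX
print(mask_value('12345-6789', keep_last=4))  # XXXXX-6789
```

Applied column-wise (e.g. `data['Name'].map(mask_value)`), this keeps each record the same shape while hiding its content.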

K-ANONYMIZATION :
AIM
The aim of k-anonymization is to generalize or suppress certain attributes in the dataset to
ensure that each record is indistinguishable from at least k-1 other records.
PROCEDURE :
1.Select a value of k (e.g., 5) to determine the level of anonymity.
2.Identify the quasi-identifiers (attributes that can potentially identify individuals when combined) in
the dataset.
3. Generalize or suppress the quasi-identifiers to achieve k-anonymity, ensuring that each combination
of quasi-identifiers is shared with at least k-1 other records.
4.Generate a new anonymized dataset with generalized or suppressed values.
5.Note : Implementing k-anonymization can be more complex and requires domain-
specific knowledge to determine appropriate generalization techniques.
SOURCE CODE:
import pandas as pd
# Original dataset
data = pd.DataFrame({
    'Name': ['John Doe', 'Jane Smith', 'Michael Johnson'],
    'Zip Code': ['12345', '67890', '54321'],
    'Age': [25, 30, 35]
})
# K-anonymization with generalization
data['Name'] = 'Anonymous'
data['Zip Code'] = 'XXXXX'
# Output anonymized dataset
print(data)
OUTPUT:

RESULT:
The quasi-identifiers, Name and Zip Code, have been generalized to "Anonymous" and
"XXXXX," respectively, ensuring each record is indistinguishable from at least k-1 other records
(in this case, 2-1=1). The original structure and format of the dataset are preserved.
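The generalization step can also be checked programmatically. The sketch below uses illustrative helpers (`generalize`, `is_k_anonymous`) and a made-up dataset, none of which are from the manual: it truncates zip codes, bins ages by decade, and then verifies that every quasi-identifier combination occurs at least k times.

```python
from collections import Counter

def generalize(record):
    # Truncate the zip code to 3 digits and bin the age by decade
    zip3 = record['zip'][:3] + 'XX'
    decade = (record['age'] // 10) * 10
    return (zip3, f'{decade}-{decade + 9}')

def is_k_anonymous(records, k):
    # k-anonymity holds when each generalized combination occurs at least k times
    counts = Counter(generalize(r) for r in records)
    return all(c >= k for c in counts.values())

records = [
    {'zip': '12345', 'age': 25},
    {'zip': '12367', 'age': 27},
    {'zip': '54321', 'age': 33},
]
print(is_k_anonymous(records, 2))  # False: the '543XX' group has only one record
```

When the check fails, the usual remedies are coarser generalization (fewer zip digits, wider age bands) or suppression of the outlying records.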

EX NO:06 IMPLEMENT ANY ENCRYPTION ALGORITHM TO
DATE: PROTECT THE IMAGES

AIM:
The aim is to encrypt an image file using the AES encryption algorithm to protect its contents from
unauthorized access.
PROCEDURE:
1. Choose an encryption algorithm : Select a suitable encryption algorithm such as AES
(Advanced Encryption Standard) or RSA (Rivest-Shamir-Adleman).
2. Generate an encryption key: Generate a strong encryption key that will be used to encrypt
and decrypt the images. The key should be kept secure and only accessible to authorized
users.
3. Encrypt the images: Use the chosen encryption algorithm and the generated key to encrypt
the image files. Iterate through each image file, read its contents, encrypt the data using the
encryption key and write the encrypted data to a new file.
4. Choose a cloud storage service: Select a cloud storage service provider that meets the
requirements in terms of security, reliability and cost.
5. Upload the encrypted images: Use the cloud storage provider’s API or client library to
upload the encrypted image files to the cloud. Follow the appropriate documentation and
guidelines provided by the cloud service to ensure a secure upload process.
6. Manage encryption keys: Implement a secure key management system to store and manage
the encryption keys. The system should enforce access controls and provide secure storage
for the keys.
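Step 3 depends on padding, because AES in CBC mode only encrypts whole 16-byte blocks. A minimal sketch of the PKCS#7 scheme (the same idea implemented by `Crypto.Util.Padding.pad`) illustrates what happens to the image bytes before encryption:

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    # Append N bytes, each with value N, so the length becomes a block multiple
    pad_len = block_size - (len(data) % block_size)
    return data + bytes([pad_len]) * pad_len

def pkcs7_unpad(padded: bytes) -> bytes:
    # The last byte tells how many padding bytes to strip after decryption
    return padded[:-padded[-1]]

sample = b'image bytes'        # 11 bytes
padded = pkcs7_pad(sample)     # 16 bytes: 5 padding bytes, each of value 5
assert pkcs7_unpad(padded) == sample
print(len(padded))  # → 16
```

Note that even data already a multiple of the block size gains a full block of padding, so the unpad step is always unambiguous.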

SOURCE CODE:
import os
import boto3
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad

# Set AWS S3 credentials and bucket name
AWS_ACCESS_KEY_ID = 'the_access_key'
AWS_SECRET_ACCESS_KEY = 'the_secret_access_key'
BUCKET_NAME = 'the_bucket_name'

# Set encryption key (must be 16, 24, or 32 bytes long)
encryption_key = b'ThisIsASecretKey'

def encrypt_image(input_file):
    # Read the image file
    with open(input_file, 'rb') as file:
        image_data = file.read()
    # Generate a random initialization vector (IV)
    iv = os.urandom(16)
    # Create an AES cipher object in CBC mode
    cipher = AES.new(encryption_key, AES.MODE_CBC, iv)
    # Pad the image data to a multiple of the block size
    padded_data = pad(image_data, AES.block_size)
    # Encrypt the padded data
    encrypted_data = cipher.encrypt(padded_data)
    # Return the encrypted data and IV
    return encrypted_data, iv

def upload_encrypted_image(encrypted_data, iv, filename):
    # Create an S3 client
    s3 = boto3.client('s3',
                      aws_access_key_id=AWS_ACCESS_KEY_ID,
                      aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
    # Upload the encrypted data as an S3 object
    s3.put_object(Body=encrypted_data, Bucket=BUCKET_NAME, Key=filename)
    # Upload the IV as a separate S3 object (needed later for decryption)
    iv_filename = f'{filename}.iv'
    s3.put_object(Body=iv, Bucket=BUCKET_NAME, Key=iv_filename)

# Set the path to the image file
input_file = 'original_image.jpg'
# Encrypt the image
encrypted_data, iv = encrypt_image(input_file)
# Set the file name for the encrypted image
filename = 'encrypted_image.jpg'
# Upload the encrypted image to S3
upload_encrypted_image(encrypted_data, iv, filename)

OUTPUT:
The script prints no output; on success the encrypted image data is uploaded to the S3 bucket
as encrypted_image.jpg, with its initialization vector stored as a separate object.

RESULT:
The script encrypts the image using AES in CBC mode and uploads the encrypted data to the
configured S3 bucket as encrypted_image.jpg, with the IV uploaded as a separate object so the
image can later be decrypted by an authorized user.
EX NO:07
IMPLEMENT ANY IMAGE OBFUSCATION MECHANISM
DATE:

AIM:
The aim is to obfuscate an image in the cloud by applying a blurring filter to make it less
recognizable.
PROCEDURE:
1. Choose a cloud-based image processing service: Select a cloud service provider that offers
image processing capabilities. In this example, we will use the Google Cloud Vision API.
2. Set up the Google Cloud Vision API: Set up a Google Cloud project and enable the Vision API.
Obtain the necessary API credentials and install the google-cloud-vision Python library.
3. Authenticate with the Google Cloud Vision API: Use the API credentials to authenticate the
application and establish a connection to the Vision API.
4. Obfuscate the image with blurring: Send the image to the Vision API for analysis, then apply
a blurring filter to obfuscate it (the Vision API analyzes image content; the blur itself is
applied with an image-processing library).
5. Retrieve and save the obfuscated image: Receive the modified image from the vision API
response and save it to cloud or download it locally.
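The blurring in step 4 boils down to averaging each pixel with its neighbors. A dependency-free sketch on a 2D grayscale grid shows the idea; it is illustrative only, and a real implementation would use an image library such as Pillow:

```python
def box_blur(pixels, k=1):
    # Average each cell with its neighbors in a (2k+1) x (2k+1) window,
    # clamping the window at the grid edges
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [pixels[j][i]
                      for j in range(max(0, y - k), min(h, y + k + 1))
                      for i in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(window) // len(window)
    return out

# A sharp edge (0 vs 255) becomes a gradient after blurring
grid = [[0, 0, 255, 255]] * 4
print(box_blur(grid)[0])  # → [0, 85, 170, 255]
```

Larger values of k widen the averaging window and produce stronger obfuscation, at the cost of losing more detail.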

SOURCE CODE:
import io
from google.cloud import vision
from PIL import Image, ImageFilter

def obfuscate_image(image_path):
    # Authenticate with the Google Cloud Vision API
    client = vision.ImageAnnotatorClient()
    # Read the image file
    with io.open(image_path, 'rb') as image_file:
        content = image_file.read()
    # Create a Vision API image object
    image = vision.Image(content=content)
    # Analyze the image (the Vision API inspects content but does not
    # return a modified image, so the blur is applied locally)
    response = client.safe_search_detection(image=image)
    # Apply a Gaussian blur to obfuscate the image
    blurred_image = Image.open(image_path).filter(ImageFilter.GaussianBlur(radius=10))
    # Save the obfuscated image
    output_path = 'obfuscated_image.jpg'
    blurred_image.save(output_path, 'JPEG')
    return output_path

# Set the path to the image file
image_path = 'original_image.jpg'
# Obfuscate the image
obfuscated_image_path = obfuscate_image(image_path)
# Print the path to the obfuscated image
print('Obfuscated image path:', obfuscated_image_path)

OUTPUT:
Upon successful execution, the script obfuscates the image by applying a blurring filter. The
resulting obfuscated image is saved as obfuscated_image.jpg in the same directory, and the script
prints the path to the obfuscated image.

RESULT:
The image is visually obfuscated by applying a blurring filter. The level of obfuscation
depends on the specific blurring technique and radius used. The resulting obfuscated image helps
protect sensitive content while preserving the overall structure of the original image.
EX NO:08 IMPLEMENT A ROLE-BASED ACCESS CONTROL
DATE: MECHANISM IN A SPECIFIC SCENARIO SOLUTION

AIM:
The aim is to implement a role-based access control (RBAC) mechanism in a specific cloud
scenario to manage and enforce access control policies based on user roles.
PROCEDURE:
1. Choose a cloud provider with RBAC support: Select a cloud provider that offers RBAC
capabilities. In this example, we will use Microsoft Azure.
2. Define user roles: Identify the different roles needed for the cloud scenario. Roles could include
administrators, developers, and end users. Define the specific permissions and access levels
associated with each role.
3. Create RBAC roles: Create RBAC roles within the provider's RBAC service. Define the
necessary permissions for each role based on the requirements.
4. Assign roles to users: Assign the appropriate roles to the users or groups within the cloud
provider's RBAC service. Users can be assigned one or more roles depending on their
responsibilities.
5. Implement access control checks: Within the cloud application or infrastructure, implement
access control checks based on the user’s role. This can be achieved by leveraging the RBAC
service provided by the cloud provider.

SOURCE CODE:
The implementation of RBAC is specific to the cloud provider and the programming language
used for the application. Below is an example using Python and the Azure SDK:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Set up Azure credential and client
credential = DefaultAzureCredential()
secret_client = SecretClient(vault_url='<the_key_vault_url>', credential=credential)

# Define RBAC roles and associated permissions
roles = {
    'admin': ['read', 'write', 'delete'],
    'developer': ['read', 'write'],
    'end_user': ['read']
}

# Define user roles (placeholder email addresses)
user_roles = {
    'user1@example.com': 'admin',
    'user2@example.com': 'developer',
    'user3@example.com': 'end_user'
}

# Get the logged-in user's email (replace this with real authentication logic)
logged_in_user_email = 'user1@example.com'

# Check access based on the user's role
def check_access(permission):
    if logged_in_user_email in user_roles:
        user_role = user_roles[logged_in_user_email]
        if permission in roles[user_role]:
            return True
    return False

# Example usage: check whether the user can write
can_write = check_access('write')
print('User can write:', can_write)

OUTPUT:
The output of the script will be Boolean value indicating whether the logged in user has the
necessary permissions based on their assigned role. In this example, it will print whether the user can
write or not.

RESULT:
The RBAC mechanism implemented allows to manage access control based on user roles in
the cloud application or infrastructure. Users are assigned roles with specific permissions, and access
control checks are performed based on those roles.
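In application code, the per-role checks from step 5 are often centralized rather than scattered through business logic. One common pattern is a permission decorator; the sketch below is illustrative and not tied to any Azure SDK:

```python
roles = {
    'admin': {'read', 'write', 'delete'},
    'developer': {'read', 'write'},
    'end_user': {'read'},
}

def require_permission(permission):
    # Reject the call unless the caller's role grants the permission
    def decorator(func):
        def wrapper(user_role, *args, **kwargs):
            if permission not in roles.get(user_role, set()):
                raise PermissionError(f"role '{user_role}' may not {permission}")
            return func(user_role, *args, **kwargs)
        return wrapper
    return decorator

@require_permission('write')
def update_record(user_role, record_id):
    return f'record {record_id} updated'

print(update_record('developer', 42))  # → record 42 updated
```

Calling update_record('end_user', 42) raises PermissionError, so the access rule is enforced in one place regardless of how many functions use the decorator.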
EX NO:09 IMPLEMENT AN ATTRIBUTED BASED ACCESS CONTROL
DATE: MECHANISM BASED ON PARTICULAR SCENARIO

AIM:
The aim is to implement an attribute-based access control mechanism in a specific cloud
scenario to manage and enforce access control based on user attributes.
PROCEDURE:
1. Define attributes: Identify the attributes that are required by the access control policies.
Attributes could include user roles, department, location, time of access, or any other relevant
information.
2. Define access control policies: Define the access control policies based on the attributes. For
example, there may be policies that allow users with the "manager" role in the sales
department to access certain resources.
3. Set up an attribute authority: Create an attribute service that can provide attribute values for
users. This service could be a separate component or integrated within the application.
4. Implement access control checks: Within the cloud application or infrastructure, implement
access control checks based on the user's attributes. These checks will involve querying the
attribute authority to obtain attribute values for the user and comparing them against the access
control policies.
5. Enforce access control: Based on the access control checks, allow or deny access to the
requested resources or functionality within the cloud environment.

SOURCE CODE:
The implementation of ABAC is specific to the cloud provider and the programming language
used for the application. Below is an example using Python:

class AttributeAuthority:
    def get_attribute(self, user_id, attribute_name):
        # Implement logic to retrieve the attribute value for the user from a data
        # source; this could involve querying a database, an external service, or
        # any other method
        attribute_value = None
        if attribute_name == 'role':
            # Example: get the user's role from a database
            attribute_value = get_user_role_from_database(user_id)
        elif attribute_name == 'department':
            # Example: get the user's department from an external service
            attribute_value = get_user_department_from_external_service(user_id)
        # Add more conditions for other attributes as needed
        return attribute_value

def check_access(user_id, resource_id, action):
    # Create an instance of the attribute authority
    attribute_authority = AttributeAuthority()
    # Define access control policies
    access_control_policies = [
        {
            'role': 'manager',
            'department': 'sales',
            'resource': 'sales_data',
            'action': 'read'
        },
        {
            'role': 'admin',
            'resource': 'admin_panel',
            'action': 'write'
        }
        # Add more access control policies as needed
    ]
    # Get attribute values for the user
    user_role = attribute_authority.get_attribute(user_id, 'role')
    user_department = attribute_authority.get_attribute(user_id, 'department')
    # Check whether the user has access based on the attributes
    for policy in access_control_policies:
        if policy['resource'] == resource_id and policy['action'] == action:
            if (policy.get('role') is None or policy['role'] == user_role) and \
               (policy.get('department') is None or policy['department'] == user_department):
                return True
    return False

# Example usage: check whether the user with ID 'user1' can read the 'sales_data' resource
can_read = check_access('user1', 'sales_data', 'read')
print('User can read:', can_read)

OUTPUT:
The output of the script will be Boolean value indicating whether the user with the specified
ID has access to the requested resource and action. In this example it will print whether the user can
read the ‘sales _ data ’resource.

RESULT:
The ABAC mechanisms implemented allows to manage access control based on user
attributes in the cloud application or infrastructure. Users have attributes associated with them, and
access control policies are defined based on these attributes. Access control checks are performed by
querying the attribute authority for attribute values and comparing them against the access control
policies. This allows for fine grained for attribute control over resource access based on user
attributes.
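Since the source code above depends on external lookup helpers, a self-contained variant with an in-memory attribute store (hypothetical users and policies, for illustration only) shows the same check end to end:

```python
# Hypothetical in-memory attribute store
user_attributes = {
    'user1': {'role': 'manager', 'department': 'sales'},
    'user2': {'role': 'developer', 'department': 'engineering'},
}

access_control_policies = [
    {'role': 'manager', 'department': 'sales',
     'resource': 'sales_data', 'action': 'read'},
]

def check_access(user_id, resource_id, action):
    attrs = user_attributes.get(user_id, {})
    for policy in access_control_policies:
        if policy['resource'] != resource_id or policy['action'] != action:
            continue
        # Every attribute named in the policy must match the user's attributes
        if all(attrs.get(key) == value for key, value in policy.items()
               if key not in ('resource', 'action')):
            return True
    return False

print(check_access('user1', 'sales_data', 'read'))  # → True
print(check_access('user2', 'sales_data', 'read'))  # → False
```

Writing the match as "every attribute named in the policy must match" means a policy that omits an attribute simply does not constrain it, which is the same semantics as the None checks in the longer example.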

EX NO:10 DEVELOP A LOG MONITORING SYSTEM WITH INCIDENT
DATE: MANAGEMENT IN THE CLOUD

AIM:
The aim is to develop a log monitoring system with incident management in the cloud. The
system should monitor logs from various sources, detect anomalies or preferred and generate
incidents for further investigation and resources.
PROCEDURE:
1. Choose a cloud provider: Select a cloud provider that offers logging and monitoring services.
In this example, we will use Amazon Web Services (AWS) services such as Amazon
CloudWatch and AWS Lambda.
2. Set up log sources: Configure the application or infrastructure to send logs to a centralized
logging service. This could be done by integrating logging libraries, configuring log
forwarders, or using cloud logging services.
3. Configure log monitoring: Setup log monitoring rules in the logging service to detect
anomalies or patterns of interest. This could involve defining metrics, filters, or alarms based
on log data.
4. Configure incident management: Set up incident management capabilities to handle and track
incidents generated by the log monitoring system. This could be done using incident
management tools or custom workflows.
5. Implement incident handling: Define the procedures and workflows for incident handling,
including incident triage, assignment, investigation, and resolution. This may involve
integrating with incident management tools, sending notifications, or executing automated
actions.
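The anomaly detection in step 3 can be as simple as a threshold over a sliding window of recent log events. The sketch below is illustrative; the window size and threshold are arbitrary choices, and in practice CloudWatch metric filters and alarms would play this role:

```python
from collections import deque

class ErrorRateMonitor:
    # Flag an anomaly when the error fraction in the window exceeds the threshold
    def __init__(self, window=10, threshold=0.5):
        self.events = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, is_error):
        self.events.append(1 if is_error else 0)
        return sum(self.events) / len(self.events) > self.threshold

monitor = ErrorRateMonitor(window=4, threshold=0.5)
for is_error in [False, True, True, True]:
    anomaly = monitor.observe(is_error)
print(anomaly)  # → True
```

Because the deque has a fixed maximum length, old events drop out automatically, so the monitor recovers on its own once the error burst passes.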

SOURCE CODE:
The implementation of a complete log monitoring system with incident management is
beyond the scope of a single source code example. However, here is an example of a basic AWS
Lambda function that can be triggered by log events in Amazon CloudWatch and generate an incident:
import boto3

def generate_incident(event, context):
    # Extract relevant information from the log event
    # (field names depend on how the event is delivered to Lambda)
    log_group = event['detail']['logGroup']
    log_stream = event['detail']['logStream']
    log_message = event['detail']['message']

    # Perform further processing or anomaly detection based on the log data
    # ...

    # Build the incident details
    incident_title = 'Anomaly detected in log stream: {}'.format(log_stream)
    incident_description = 'Anomaly detected in log group: {}\nLog message: {}'.format(
        log_group, log_message)
    # Log the full details to CloudWatch Logs for the responder
    print(incident_description)

    # Open an incident in AWS Systems Manager Incident Manager; start_incident
    # requires a pre-created response plan, which carries the notification,
    # escalation, and chat-channel settings
    incident_manager = boto3.client('ssm-incidents')
    incident_manager.start_incident(
        responsePlanArn='<the_response_plan_arn>',
        title=incident_title,
        impact=1  # impact level of the incident (1 = highest)
    )

This example demonstrates a basic Lambda function that can be triggered by log events in
CloudWatch. It extracts relevant information from the log event and generates an incident using
the AWS Incident Manager service. Further customization and integration with other incident
management tools may be necessary based on the requirements.

OUTPUT:
The lambda function will be triggered by log events in cloud watch, and it will generate an
incident in the specific incident management system. The output will depend on the incident
management system used and its integration with the lambda function.

RESULT:
The log monitoring system with incident management allows real-time monitoring of logs,
detection of anomalies or predefined patterns, and generation of incidents for further investigation
and resolution.

