CLOUD COMPUTING LAB MANUAL
Working with AWS CloudShell and the AWS Cloud9 IDE.
Part A
Step 1: Log in to AWS Management Console
1. Go to AWS Console.
2. Navigate to EC2 Dashboard.
Step 2: Launch an EC2 Instance
1. Click on "Launch Instance".
2. Enter the instance name.
3. Choose an Amazon Machine Image (AMI) (e.g., Amazon Linux, Ubuntu).
4. Select an Instance Type (e.g., t2.micro for the free tier).
5. Configure key pair (download and keep it safe).
6. Configure network settings (allow SSH, HTTP, etc.).
7. Click Launch Instance.
Step 3: Connect to EC2 Instance
1. Go to Instances in EC2 Dashboard.
2. Copy the Public IP of your instance.
3. Open a terminal and run:
ssh -i your-key.pem ec2-user@your-public-ip
4. (For Ubuntu AMIs, use ubuntu instead of ec2-user.)
Step 4: Verify Web Server (Optional)
1. Install Apache (httpd):
sudo yum install httpd -y
sudo systemctl start httpd
sudo systemctl enable httpd
2. Create a simple HTML file at /var/www/html/index.html:
<h1>EC2 Web Server is Working!</h1>
3. Open the public IP in a browser to check.
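As an alternative to the browser check, a short Python snippet can confirm the web server responds (the IP below is a hypothetical placeholder; substitute your instance's public IP):

```python
import urllib.request

def check_server(url: str, timeout: float = 5.0) -> int:
    """Fetch the URL and return the HTTP status code (200 means the server is up)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status

# Example with a placeholder IP:
# check_server("http://203.0.113.10/")   # expect 200 once httpd is running
```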
Step 5: Pricing Check
1. Go to EC2 Dashboard > Instances.
2. Click on the Instance ID to see billing and details.
Part B
Step 1: Access AWS CloudShell
1. Sign in to the AWS Management Console.
2. Click on CloudShell in the AWS services menu.
3. Wait for the CloudShell environment to initialize.
Step 2: Explore CloudShell
1. Run basic commands to check the environment.
Verify AWS CLI is available by running:
aws --version
2. Navigate and check available directories.
Step 3: Create an EC2 Instance Using CloudShell
Use the AWS CLI to launch an instance:
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t2.micro --key-name MyKeyPair --security-groups MySecurityGroup
1. Note down the Instance ID from the response.
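The same launch can be scripted with boto3, the AWS SDK for Python. This is a sketch: the AMI ID, key pair, and security group names are placeholders, and the commented-out API call needs valid AWS credentials.

```python
# Parameters mirroring the `aws ec2 run-instances` CLI call above.
params = {
    "ImageId": "ami-xxxxxxxx",          # placeholder AMI ID
    "InstanceType": "t2.micro",
    "MinCount": 1,
    "MaxCount": 1,
    "KeyName": "MyKeyPair",
    "SecurityGroups": ["MySecurityGroup"],
}

# import boto3
# ec2 = boto3.client("ec2")
# response = ec2.run_instances(**params)
# instance_id = response["Instances"][0]["InstanceId"]
```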
Step 4: Upload a File to CloudShell
1. Click Upload File in CloudShell.
2. Select a file from your system to upload.
Step 5: Work with Multiple Environments
1. Split CloudShell into multiple sessions if needed.
2. Use different sessions for different tasks.
Step 6: Download a File from CloudShell
1. Use the Download option to save files locally.
Step 7: Remove a File and Create One with Content
1. Delete a file using:
rm filename
2. Create a new file and add content:
echo "Hello, CloudShell!" > file.txt
Step 8: Create a Virtual Private Cloud (VPC)
1. Use the AWS CLI to create a VPC:
aws ec2 create-vpc --cidr-block 10.0.0.0/16
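The same VPC can be created with boto3. The snippet below validates the CIDR block locally first; the API call itself is commented out because it needs AWS credentials.

```python
import ipaddress

cidr = "10.0.0.0/16"
network = ipaddress.ip_network(cidr)   # raises ValueError for a malformed CIDR
print(network.num_addresses)           # a /16 holds 65536 addresses

# import boto3
# ec2 = boto3.client("ec2")
# vpc = ec2.create_vpc(CidrBlock=cidr)
```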
Step 9: Delete CloudShell
1. Click Actions in the top-right corner of the CloudShell window.
2. Select Delete AWS CloudShell home directory to reset the environment.
Working with Amazon S3
Step 1: Create an S3 Bucket
1. Sign in to AWS Management Console.
2. Navigate to S3.
3. Click Create Bucket.
4. Enter a unique bucket name.
5. Choose a region and keep default settings.
6. Click Create.
Step 2: Upload Files to S3
1. Open your S3 bucket.
2. Click Upload.
3. Select an HTML file (for website hosting) or other files.
4. Click Upload to complete the process.
Step 3: Enable Versioning (Optional)
1. Enable versioning under the bucket's Properties tab.
2. If you then modify and re-upload a file, S3 stores each version, which you can track under the Versions tab.
Step 4: Configure Static Website Hosting
1. Go to your S3 bucket.
2. Click on Properties.
3. Scroll to Static website hosting.
4. Click Edit.
5. Enable the option.
6. Enter the index document (e.g., index.html).
7. Click Save.
Step 5: Make the Bucket Public
1. Navigate to the Permissions tab.
2. Click on Bucket Policy.
3. Add the following JSON policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::<Bucket-Name>/*"]
    }
  ]
}
4. Replace <Bucket-Name> with your actual bucket name.
5. Click Save.
Step 6: Access Your Hosted Website
1. Go back to Properties.
2. Scroll down to Static Website Hosting.
3. Copy the website URL.
4. Paste it into a browser to see your website live.
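Steps 5 and 6 can also be scripted with boto3. The helper below builds the public-read policy for a given bucket name; the put_bucket_policy call is commented out because it needs credentials and a real bucket.

```python
import json

def public_read_policy(bucket: str) -> str:
    """Return bucket policy JSON allowing public s3:GetObject on all objects."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        }],
    }
    return json.dumps(policy)

# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_policy(Bucket="my-bucket", Policy=public_read_policy("my-bucket"))
```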
Working with Amazon DynamoDB
Step 1: Create a DynamoDB Table
1. Sign in to AWS Management Console.
2. Navigate to DynamoDB.
3. Click Create Table.
4. Enter a Table Name (e.g., StudentData).
5. Set a Partition Key (Primary Key), e.g., USN (Unique Student Number).
6. Choose DynamoDB Standard-IA for cost efficiency.
7. Click Create.
Step 2: Insert Items into the Table
1. After table creation, select your table.
2. Click Explore Table Items.
3. Click Create Item.
4. Choose JSON format and enter:
{
  "USN": { "S": "03" },
  "NAME": { "S": "XYZ" },
  "DEPT": { "S": "MECH" },
  "AGE": { "N": "13" }
}
5. Click Save.
Step 3: Update an Item
1. Go to Explore Table Items.
2. Select an item to update.
3. Modify the JSON data or values.
4. Click Save.
Step 4: Query the Table
1. Go to the Query tab.
2. Enter a Partition Key value (e.g., USN: "03").
3. Click Run to fetch the item.
Step 5: Delete an Item
1. Select an item in the table.
2. Click Delete.
3. Confirm the deletion.
Step 6: Delete the Table (If Needed)
1. Go to Tables.
2. Select your table.
3. Click Delete Table.
Building REST APIs with Amazon API Gateway
Step 1: Create an AWS Lambda Function
1. Sign in to the AWS Management Console.
2. Go to AWS Lambda.
3. Click Create Function.
4. Choose Author from scratch.
5. Enter a Function Name.
6. Select a Runtime (e.g., Python, Node.js).
7. Click Create Function.
Step 2: Configure Lambda Function
1. Write your Lambda function code:
import json

def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from Lambda!"})
    }
2. Click Deploy.
3. Copy the Function URL and test it in a browser.
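Because the handler is plain Python, it can be exercised locally before deploying. The sketch below repeats the handler and calls it with an empty event and no context object:

```python
import json

def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from Lambda!"})
    }

# Invoke locally with a dummy event; Lambda would pass a real context object.
result = lambda_handler({}, None)
print(result["statusCode"])                    # 200
print(json.loads(result["body"])["message"])   # Hello from Lambda!
```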
Step 3: Go to API Gateway
1. Open AWS API Gateway in the AWS Console.
2. Click Create API.
3. Select REST API.
4. Click Build.
Step 4: Create an API and Method
1. Click Create Resource and name it (e.g., /hello).
2. Click Create Method.
3. Select GET as the HTTP method.
4. Choose Lambda Function as the integration type.
5. Enter your Lambda Function Name.
6. Click Save and Deploy API.
Step 5: Test the API
1. Copy the API Endpoint URL.
2. Open a new browser tab and paste the URL.
3. Press Enter to see the response.
Developing Lambda Functions Using the AWS SDK for Python
Step 1: Sign in to AWS Console
1. Go to the AWS Management Console.
2. Navigate to AWS Lambda.
Step 2: Create a Lambda Function
1. Click Create Function.
2. Choose Author from Scratch.
3. Enter a Function Name.
4. Select a Runtime (e.g., Python, Node.js).
5. Click Create Function.
Step 3: Configure the Function
1. Open the function.
2. Scroll to Configuration.
3. Enable Function URL (if required).
4. Click Deploy.
Step 4: Test the Function
1. Click Test.
2. Create a new test event.
3. Run the function and check the output.
Step 5: Verify the Function URL
1. Copy the Function URL.
2. Paste it into a browser.
3. If configured correctly, you should see a response.
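Since this lab names the AWS SDK for Python, the function can also be invoked programmatically. Only the payload encoding runs live below; the boto3 call is commented out because it needs credentials, and the function name is a hypothetical placeholder.

```python
import json

# Lambda's Invoke API expects the request payload as JSON bytes.
payload = json.dumps({"name": "student"}).encode("utf-8")
decoded = json.loads(payload)   # round-trip check of the encoding

# import boto3
# client = boto3.client("lambda")
# resp = client.invoke(FunctionName="my-function",       # hypothetical name
#                      InvocationType="RequestResponse",
#                      Payload=payload)
# result = json.loads(resp["Payload"].read())
```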
Deploying a Web Application to Docker Containers
Step 1: Launch an EC2 Instance
1. Go to the AWS Management Console.
2. Navigate to EC2 Dashboard and click Launch Instance.
3. Choose a Linux Machine (Amazon Linux or Ubuntu).
4. Enable HTTP and HTTPS in security settings.
5. Keep other settings as default and launch the instance.
Step 2: Connect to the EC2 Instance
1. Go to the EC2 Dashboard.
2. Click on your instance and select Connect.
3. Use SSH to connect:
ssh -i your-key.pem ec2-user@your-public-ip
Step 3: Install and Start Docker
1. Update the system:
sudo yum update -y
2. Install Docker:
sudo yum install docker -y
3. Start the Docker service:
sudo service docker start
4. Check Docker status:
sudo service docker status
5. Switch to the root user:
sudo su
Step 4: Pull and Run a Docker Container
1. Verify the Docker installation:
docker version
2. Pull the Nginx Docker image:
docker pull nginx
3. List available Docker images:
docker images
4. Run the Nginx container:
docker run -d -p 80:80 nginx
5. Check running containers:
docker ps
Step 5: Access the Web Application
1. Go to EC2 Dashboard and select your instance.
2. Copy the Public IP.
3. Open a web browser and paste the Public IP.
4. If successful, the Nginx welcome page will be displayed.
Caching Application Data with ElastiCache, Caching with Amazon CloudFront, and Caching Strategies
Step 1: Create an ElastiCache Redis Cluster
1. Go to AWS Management Console.
2. Navigate to ElastiCache from the left panel.
3. Select Redis OSS Cache.
4. Click Continue to Redis OSS.
Step 2: Configure Redis Cluster
1. DO NOT enable Multi-AZ.
2. Select Engine Version: 7.0.
3. Set Port Number: 6379.
4. Choose Node Type: cache.t3.micro.
5. Set Shards: 2.
6. Provide a subnet group name (create a new subnet group if needed).
Step 3: Configure Authentication
1. Select AUTH DEFAULT ACCESS.
2. Set Auth Token: AUTHtoken123456789.
Step 4: Create a Security Group in EC2
1. Open a new tab and go to EC2 Dashboard.
2. Click Security Groups > Create Security Group.
3. Add Inbound Rule:
○ Type: Custom TCP
○ Port: 6379
○ Source: Anywhere (0.0.0.0/0 for IPv4)
Step 5: Attach Security Group to ElastiCache
1. Go back to ElastiCache.
2. Select your Redis Cluster.
3. Click Manage and Refresh.
4. Select the Security Group created in EC2.
Step 6: Finalize and Create Redis Cache
1. Keep the default settings on the next page.
2. Click Create to deploy Redis Cache.
Configuring CloudFront for Caching and Application Security
Step 1: Set Up Amazon CloudFront for Caching
1. Go to AWS Management Console.
2. Navigate to CloudFront.
3. Click Create Distribution.
4. Under Origin, enter the S3 bucket or EC2 instance URL.
5. Choose Origin Protocol Policy (HTTP/HTTPS).
6. Set Caching Policy (e.g., Caching Optimized).
7. Click Create Distribution.
8. Copy the CloudFront URL and use it to access cached content.
Step 2: Set Up ElastiCache for Caching Data
1. Go to AWS Management Console > ElastiCache.
2. Click Create Cluster.
3. Choose Redis or Memcached.
4. Configure:
○ Engine Version: 7.0 (for Redis)
○ Port: 6379
○ Node Type: cache.t3.micro
○ Shards: 2
5. Click Create.
Step 3: Connect an Application to ElastiCache
1. For Redis:
import redis

client = redis.Redis(
    host='<Redis Endpoint>',
    port=6379,
    password='<YourPassword>',
    decode_responses=True
)
client.set("key", "value")
print(client.get("key"))
2. For Memcached:
import pylibmc

client = pylibmc.Client(["<Memcached Endpoint>"], binary=True)
client.set("key", "value")
print(client.get("key"))
Step 4: Implement Caching Strategies
1. Write-Through Caching: Data is written to the cache and database at the same time.
2. Lazy Loading: Data is cached only when requested.
3. Time-to-Live (TTL): Expire cache entries after a set time.
4. Cache Invalidation: Remove outdated cache data.
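These four strategies can be sketched with an in-memory dict standing in for Redis and another for the database (illustrative only, not ElastiCache API code):

```python
import time

database = {"user:1": "Alice"}   # stand-in for the backing database
cache = {}                       # stand-in for Redis; values are (data, expiry)
TTL = 60                         # time-to-live in seconds

def write_through(key, value):
    """Write-through: update the database and the cache together."""
    database[key] = value
    cache[key] = (value, time.time() + TTL)

def lazy_get(key):
    """Lazy loading: serve from cache if fresh, else load from the database."""
    entry = cache.get(key)
    if entry and entry[1] > time.time():       # TTL not yet expired
        return entry[0]
    value = database[key]                      # cache miss: hit the database
    cache[key] = (value, time.time() + TTL)
    return value

def invalidate(key):
    """Cache invalidation: drop a stale entry so the next read reloads it."""
    cache.pop(key, None)

write_through("user:2", "Bob")
print(lazy_get("user:1"))   # cache miss, loads "Alice" from the database
print(lazy_get("user:2"))   # cache hit, served from the cache
```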
Orchestrating Serverless Functions with AWS Step Functions
Step 1: Create an SNS Topic
1. Go to AWS SNS Console.
2. Click Create Topic.
3. Choose Standard as the topic type.
4. Enter a Topic Name (e.g., StudentSelectionTopic).
5. Click Create Topic.
6. Click Create Subscription.
7. Enter your email address and confirm the subscription.
Step 2: Create a Step Function
1. Go to AWS Step Functions Console.
2. Click Create State Machine.
3. Select Author from Scratch.
4. Enter a State Machine Name (e.g., course_selection_state_machine).
5. Copy and paste the following JSON code:
{
  "Comment": "An example of the Amazon States Language for scheduling a task.",
  "StartAt": "StartHere",
  "States": {
    "StartHere": {
      "Type": "Pass",
      "Next": "SubjectChoice"
    },
    "SubjectChoice": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.Subject",
          "StringEquals": "Physics",
          "Next": "Physics"
        },
        {
          "Variable": "$.Subject",
          "StringEquals": "Maths",
          "Next": "Maths"
        }
      ],
      "Default": "EndState"
    },
    "Physics": {
      "Type": "Pass",
      "Next": "CheckMarks"
    },
    "Maths": {
      "Type": "Pass",
      "Next": "CheckMarks"
    },
    "CheckMarks": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.Marks",
          "NumericGreaterThan": 70,
          "Next": "EndState"
        }
      ],
      "Default": "EndState"
    },
    "EndState": {
      "Type": "Pass",
      "End": true
    }
  }
}
6. Click Next and then Create.
Step 3: Execute the State Machine
1. Click Start Execution.
2. Enter the following Execution Input:
{
  "Subject": "Maths",
  "Marks": 76
}
OR
{
  "Subject": "Physics",
  "Marks": 91
}
3. Click Start Execution.
4. Observe the graph view as each step turns green.
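The Choice states can be simulated locally to predict which path a given input takes (illustrative Python, not the Step Functions SDK):

```python
def run_state_machine(event):
    """Trace the states an input visits, mirroring the state machine JSON above."""
    path = ["StartHere"]
    subject = event.get("Subject")
    if subject in ("Physics", "Maths"):        # SubjectChoice branches
        path += ["SubjectChoice", subject, "CheckMarks"]
        # CheckMarks routes to EndState whether or not Marks > 70
        path.append("EndState")
    else:
        path += ["SubjectChoice", "EndState"]  # Default branch
    return path

print(run_state_machine({"Subject": "Maths", "Marks": 76}))
# ['StartHere', 'SubjectChoice', 'Maths', 'CheckMarks', 'EndState']
```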
Step 4: Cleanup
1. After execution, delete the Step Function if not needed.
Automating Application Deployment Using a CI/CD Pipeline
Step 1: Create an IAM Role for EC2 and AWS CodeDeploy
1. Go to AWS IAM Console.
2. Navigate to Roles and click Create Role.
3. Select AWS Service as the trusted entity type.
4. Choose EC2 as the use case.
5. Click Next and add the permission AmazonS3ReadOnlyAccess.
6. Provide a role name and click Create Role.
Step 2: Create an IAM Role for AWS CodeDeploy
1. Repeat Step 1 but select CodeDeploy as the use case.
2. Attach AWSCodeDeployRole permissions.
3. Name the role and click Create Role.
Step 3: Launch an EC2 Instance
1. Go to EC2 Dashboard and launch an instance.
2. Select Amazon Linux as the AMI.
3. Enable SSH, HTTP, and HTTPS.
4. Click Launch Instance.
Step 4: Connect to EC2 and Install Dependencies
1. Connect to EC2 via SSH:
ssh -i your-key.pem ec2-user@your-public-ip
2. Switch to the root user:
sudo su
3. Update system packages:
sudo yum update -y
4. Install Ruby and wget:
sudo yum install ruby -y
sudo yum install wget -y
Step 5: Install AWS CodeDeploy Agent
1. Download the installation script (replace us-east-1 with your instance's region):
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
2. Make the script executable:
chmod +x ./install
3. Install the CodeDeploy agent:
sudo ./install auto
4. Check the agent status:
sudo service codedeploy-agent status
Step 6: Attach IAM Role to EC2
1. Go to EC2 Dashboard > Select the Instance.
2. Click Actions > Security > Modify IAM Role.
3. Attach the previously created IAM Role.
Step 7: Configure Security Group
1. Go to EC2 Dashboard.
2. Click on Security Groups.
3. Edit Inbound Rules:
○ Type: Custom TCP
○ Port: 4000 (or any required port)
○ Source: Anywhere (0.0.0.0/0 for IPv4)
4. Click Save Rules.
Step 8: Create a CodePipeline
1. Go to AWS CodePipeline and click Create Pipeline.
2. Enter a Pipeline Name.
3. Choose GitHub as the source.
4. Click Connect to GitHub and authenticate.
Step 9: Configure CodeBuild (Optional)
1. If required, create a Build Project.
2. Add a buildspec.yml file in your project root.
Step 10: Create a Deployment Group
1. Go to AWS CodeDeploy.
2. Click Create Application.
3. Create a Deployment Group.
4. Select EC2 Instances and provide key-value tags.
5. Uncheck the Load Balancer option.
Step 11: Add Deploy Stage
1. Return to CodePipeline.
2. Select the previously created Application Name and Deployment Group.
Step 12: Review and Create Pipeline
1. Review all settings.
2. Click Create Pipeline.
3. The CI/CD pipeline is now set up and running.