Using LocalStack for S3 Development
LocalStack is a cloud service emulator that runs in a single container on your
local machine, allowing you to develop and test your AWS applications locally
without connecting to the actual AWS cloud. Here's how to set up and use LocalStack
for S3 development:
1. Install LocalStack
You can run LocalStack using Docker:
```bash
docker pull localstack/localstack
```
2. Start LocalStack
Run LocalStack with S3 service enabled:
```bash
docker run -d --name localstack \
  -p 4566:4566 \
  -p 4510-4559:4510-4559 \
  -e SERVICES=s3 \
  -e DEBUG=1 \
  -e DATA_DIR=/tmp/localstack/data \
  localstack/localstack
```
Note: newer LocalStack versions (v1 and later) ignore `DATA_DIR`; persistence is configured with `PERSISTENCE=1` instead.
LocalStack will be available at http://localhost:4566. This is the endpoint you'll
use for all AWS services.
3. Update Your Environment Variables
Update your .env file to point to LocalStack:
```
DOCUMENT_STORAGE_TYPE=s3
AWS_S3_BUCKET_NAME=your-bucket-name
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=test
AWS_SECRET_ACCESS_KEY=test
AWS_S3_ENDPOINT=http://localhost:4566
AWS_S3_FORCE_PATH_STYLE=true
```
Note: With LocalStack, you can use any values for AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY (like "test" or "dummy").
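These environment variables map directly onto the AWS SDK v3 client options. A minimal sketch of that translation (`buildS3Config` is a hypothetical helper invented here, not part of the SDK):

```javascript
// Translate the .env values above into the options object expected by the
// AWS SDK v3 S3Client constructor. Note that the .env value is the string
// "true", while the SDK option forcePathStyle must be a boolean.
function buildS3Config(env) {
  return {
    region: env.AWS_REGION,
    endpoint: env.AWS_S3_ENDPOINT, // http://localhost:4566 for LocalStack
    credentials: {
      accessKeyId: env.AWS_ACCESS_KEY_ID,       // any value works locally
      secretAccessKey: env.AWS_SECRET_ACCESS_KEY,
    },
    forcePathStyle: env.AWS_S3_FORCE_PATH_STYLE === 'true',
  };
}

const config = buildS3Config({
  AWS_REGION: 'us-east-1',
  AWS_S3_ENDPOINT: 'http://localhost:4566',
  AWS_ACCESS_KEY_ID: 'test',
  AWS_SECRET_ACCESS_KEY: 'test',
  AWS_S3_FORCE_PATH_STYLE: 'true',
});
console.log(config.endpoint, config.forcePathStyle); // http://localhost:4566 true
```

The string-to-boolean conversion for `AWS_S3_FORCE_PATH_STYLE` is easy to miss: passing the string `"true"` straight through happens to be truthy, but an explicit comparison keeps the config honest.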
4. Create an S3 Bucket
Before using S3, you need to create a bucket. You can do this using the AWS CLI:
```bash
# Requires the AWS CLI; install it from AWS's official instructions
# (e.g. `pip install awscli`) if you don't have it

# Create a bucket
aws --endpoint-url=http://localhost:4566 s3 mb s3://your-bucket-name
```
Alternatively, you can create the bucket programmatically in your application's
startup code:
```javascript
import { S3Client, CreateBucketCommand } from '@aws-sdk/client-s3';

async function createBucketIfNotExists() {
  const s3Client = new S3Client({
    endpoint: 'http://localhost:4566',
    region: 'us-east-1',
    credentials: {
      accessKeyId: 'test',
      secretAccessKey: 'test',
    },
    forcePathStyle: true,
  });

  try {
    await s3Client.send(
      new CreateBucketCommand({ Bucket: 'your-bucket-name' })
    );
    console.log('Bucket created successfully');
  } catch (error) {
    // Ignore if the bucket already exists
    if (
      error.name !== 'BucketAlreadyExists' &&
      error.name !== 'BucketAlreadyOwnedByYou'
    ) {
      console.error('Error creating bucket:', error);
    }
  }
}

createBucketIfNotExists();
```
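The catch block above treats two error names as "bucket already exists." If you reuse that pattern in several places, it can be factored into a small predicate. A sketch (`isBucketAlreadyExistsError` is a name invented here; the error names come from the AWS SDK v3):

```javascript
// Return true when a CreateBucket failure just means the bucket is
// already there, so idempotent setup code can treat it as success.
function isBucketAlreadyExistsError(error) {
  return (
    error?.name === 'BucketAlreadyExists' ||
    error?.name === 'BucketAlreadyOwnedByYou'
  );
}

console.log(isBucketAlreadyExistsError({ name: 'BucketAlreadyOwnedByYou' })); // true
console.log(isBucketAlreadyExistsError({ name: 'AccessDenied' }));            // false
```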
5. Test S3 Operations
You can test basic S3 operations using the AWS CLI:
```bash
# List buckets
aws --endpoint-url=http://localhost:4566 s3 ls

# Upload a file
aws --endpoint-url=http://localhost:4566 s3 cp test.txt s3://your-bucket-name/

# List objects in a bucket
aws --endpoint-url=http://localhost:4566 s3 ls s3://your-bucket-name/

# Download a file
aws --endpoint-url=http://localhost:4566 s3 cp s3://your-bucket-name/test.txt downloaded.txt
```
6. Debugging
If you need to debug S3 operations, you can check the LocalStack logs:
```bash
docker logs localstack
```
7. Checking Service Status
LocalStack exposes a health endpoint that reports the status of each emulated service. You can query it at:
```
http://localhost:4566/_localstack/health
```
(Older LocalStack versions serve this at http://localhost:4566/health.) The response is a JSON document showing the status of every service running in LocalStack. A separate web UI for browsing resources is available at https://app.localstack.cloud.
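Assuming the health endpoint returns JSON shaped like `{"services": {"s3": "running"}}` (the exact status strings vary by LocalStack version; both "running" and "available" appear in practice), a readiness check might look like this (`isServiceReady` is a hypothetical helper):

```javascript
// Check whether a given service is usable, given a parsed health response.
// Assumed shape: { services: { s3: "running" | "available" | ... } }
function isServiceReady(health, service) {
  const status = health?.services?.[service];
  return status === 'running' || status === 'available';
}

const sample = { services: { s3: 'running' } };
console.log(isServiceReady(sample, 's3'));       // true
console.log(isServiceReady(sample, 'dynamodb')); // false
```

In a startup script you would `fetch` the health URL, parse the JSON, and pass the result to a check like this before running any S3 operations.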
8. Integration with Your Application
Your S3Storage adapter should work with LocalStack without any modifications as
long as you've set the correct environment variables. The key settings are:
```
AWS_S3_ENDPOINT=http://localhost:4566
AWS_S3_FORCE_PATH_STYLE=true
```
The endpoint setting points the AWS SDK at LocalStack instead of the real AWS services, and path-style addressing is needed because virtual-hosted-style URLs (which put the bucket name in the hostname) typically don't resolve against a local endpoint.
Test S3 implementation
Let's create a simple test script to verify that your S3 implementation works correctly with LocalStack. First, make sure LocalStack is running. If it isn't, start it in your terminal:
```bash
# Skip this if the container from step 2 is already running
docker run -d --name localstack \
  -p 4566:4566 \
  -e SERVICES=s3 \
  -e DEBUG=1 \
  -e DATA_DIR=/tmp/localstack/data \
  localstack/localstack
```
Now, create a new file called test-s3.js in your project root with the following
content:
```javascript
// test-s3.js
import { S3Storage } from './api/v1/modules/documents/documentStorage/s3Storage.js';

// Configure environment variables for testing
process.env.AWS_S3_BUCKET_NAME = 'test-bucket';
process.env.AWS_REGION = 'us-east-1';
process.env.AWS_ACCESS_KEY_ID = 'test';
process.env.AWS_SECRET_ACCESS_KEY = 'test';
process.env.AWS_S3_ENDPOINT = 'http://localhost:4566';
process.env.AWS_S3_FORCE_PATH_STYLE = 'true';

// Create an instance of S3Storage
const s3Storage = new S3Storage();

// Create an in-memory test file (shaped like a multer upload object)
const testContent = Buffer.from(JSON.stringify({ test: 'data' }));
const testFile = {
  originalname: 'test-file.json',
  buffer: testContent,
  size: testContent.length,
  mimetype: 'application/json',
};

// Test functions
async function createBucket() {
  try {
    const { S3Client, CreateBucketCommand, HeadBucketCommand } =
      await import('@aws-sdk/client-s3');
    const s3Client = new S3Client({
      region: process.env.AWS_REGION,
      endpoint: process.env.AWS_S3_ENDPOINT,
      credentials: {
        accessKeyId: process.env.AWS_ACCESS_KEY_ID,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
      },
      forcePathStyle: true,
    });
    const bucketName = process.env.AWS_S3_BUCKET_NAME;
    try {
      await s3Client.send(new HeadBucketCommand({ Bucket: bucketName }));
      console.log(`Bucket '${bucketName}' already exists`);
    } catch (error) {
      await s3Client.send(new CreateBucketCommand({ Bucket: bucketName }));
      console.log(`Bucket '${bucketName}' created successfully`);
    }
  } catch (error) {
    console.error('Error creating bucket:', error);
  }
}

async function testUpload() {
  try {
    console.log('Testing file upload...');
    const result = await s3Storage.uploadFile(
      testFile,
      'test-document-id',
      'test-tenant-id'
    );
    console.log('Upload successful:', result);
    return result;
  } catch (error) {
    console.error('Upload failed:', error);
  }
}

async function testDownload(filePath) {
  try {
    console.log(`Testing file download from path: ${filePath}...`);
    const data = await s3Storage.downloadFile(filePath);
    console.log('Download successful. File content:', data.toString());
  } catch (error) {
    console.error('Download failed:', error);
  }
}

async function testFileExists(filePath) {
  try {
    console.log(`Testing if file exists at path: ${filePath}...`);
    const exists = await s3Storage.fileExists(filePath);
    console.log('File exists:', exists);
  } catch (error) {
    console.error('File exists check failed:', error);
  }
}

async function testDeleteFile(filePath) {
  try {
    console.log(`Testing file deletion from path: ${filePath}...`);
    const result = await s3Storage.deleteFile(filePath);
    console.log('File deletion successful:', result);
  } catch (error) {
    console.error('File deletion failed:', error);
  }
}

async function testGetFileMetadata(filePath) {
  try {
    console.log(`Testing get file metadata from path: ${filePath}...`);
    const metadata = await s3Storage.getFileMetadata(filePath);
    console.log('File metadata:', metadata);
  } catch (error) {
    console.error('Get file metadata failed:', error);
  }
}

async function testGeneratePresignedUrl(filePath) {
  try {
    console.log(`Testing generate presigned URL for path: ${filePath}...`);
    const url = await s3Storage.generatePresignedUrl(filePath);
    console.log('Presigned URL:', url);
  } catch (error) {
    console.error('Generate presigned URL failed:', error);
  }
}

// Run tests
async function runTests() {
  await createBucket();
  const uploadResult = await testUpload();
  if (uploadResult) {
    await testFileExists(uploadResult.path);
    await testGetFileMetadata(uploadResult.path);
    await testGeneratePresignedUrl(uploadResult.path);
    await testDownload(uploadResult.path);
    await testDeleteFile(uploadResult.path);
    await testFileExists(uploadResult.path);
  }
}

runTests().catch(console.error);
```
To run this test script, you'll need to modify your package.json to include the
following:
```json
"type": "module"
```
Then, you can run the test with:
```bash
node test-s3.js
```
This script will:
1. Create a test bucket if it doesn't exist
2. Upload a test file to S3
3. Check that the file exists
4. Get the file's metadata
5. Generate a presigned URL for the file
6. Download the file
7. Delete the file
8. Verify the file no longer exists
If all tests pass, your S3 implementation is working correctly with LocalStack.
Note: Make sure LocalStack is running before executing the test script. If you
encounter any issues, check the LocalStack logs with:
```bash
docker logs localstack
```
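If the test script sometimes starts before LocalStack has finished booting, a small retry helper can wait for readiness. This is a sketch with an injectable `check` function (in practice `check` would query the health endpoint from step 7; `waitFor` is a name invented here):

```javascript
// Retry an async readiness check until it returns true or attempts run out.
async function waitFor(check, { retries = 10, delayMs = 500 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    if (await check()) return attempt; // ready: report which attempt succeeded
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`not ready after ${retries} attempts`);
}

// Example: a fake check that succeeds on the third call.
let calls = 0;
waitFor(() => ++calls >= 3, { delayMs: 1 }).then((attempt) =>
  console.log(`ready on attempt ${attempt}`) // ready on attempt 3
);
```

Calling `await waitFor(...)` at the top of `runTests()` would make the script robust to a freshly started container.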
Would you like me to explain any specific part of the test script in more detail?