A real-time multiplayer tic-tac-toe game built with Next.js frontend and WebSocket backend, deployable on AWS with auto-scaling capabilities.
```
tic-tac-toe/
├── frontend/               # Next.js frontend application
│   ├── src/
│   │   ├── app/            # Next.js App Router pages
│   │   └── components/     # React components
│   ├── public/             # Static assets
│   ├── package.json
│   ├── Dockerfile
│   └── next.config.ts
├── backend/                # WebSocket server
│   ├── server.ts           # Main server file
│   ├── package.json
│   ├── Dockerfile
│   └── tsconfig.json
├── terraform/              # AWS infrastructure as code
├── docker-compose.yml      # Production Docker setup
├── docker-compose.dev.yml  # Development Docker setup
└── deploy.sh               # Deployment script
```
- Real-time Multiplayer: WebSocket-based communication for instant gameplay
- Multiple Grid Sizes: Support for 3x3 and 4x4 game boards
- Score Tracking: Persistent score tracking across games
- Role Management: Player and spectator roles with different permissions
- AWS Deployment: Scalable infrastructure with auto-scaling capabilities
- Docker Support: Containerized for easy deployment and development
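Under the hood, gameplay flows as messages over the WebSocket connection. The sketch below shows a hypothetical message protocol and parser in TypeScript; the message shapes and names are assumptions for illustration, not the repository's actual format:

```typescript
// Hypothetical client-to-server message protocol — illustrative only;
// the repository's actual WebSocket message format may differ.
type ClientMessage =
  | { type: "join"; gameId: string; role: "player" | "spectator" }
  | { type: "move"; gameId: string; cell: number };

// Parse and validate a raw WebSocket frame, rejecting malformed input.
function parseMessage(raw: string): ClientMessage | null {
  try {
    const msg = JSON.parse(raw);
    if (msg.type === "join" && typeof msg.gameId === "string") return msg;
    if (msg.type === "move" && typeof msg.cell === "number") return msg;
    return null; // unknown or incomplete message
  } catch {
    return null; // invalid JSON
  }
}

console.log(parseMessage('{"type":"move","gameId":"g1","cell":4}'));
```

Validating every inbound frame like this keeps a single malformed client message from crashing the shared server process.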
1. Clone the repository

   ```bash
   git clone <repository-url>
   cd tic-tac-toe
   ```

2. Start with Docker Compose (Recommended)

   ```bash
   # Development mode with hot reload
   docker-compose -f docker-compose.dev.yml up

   # Production mode
   docker-compose up
   ```

3. Manual Setup

   ```bash
   # Backend
   cd backend
   npm install
   npm run dev

   # Frontend (in a new terminal)
   cd frontend
   npm install
   npm run dev
   ```

4. Access the application

   - Frontend: http://localhost:3000
   - Backend WebSocket: ws://localhost:8080
1. Configure AWS credentials

   ```bash
   export AWS_ACCESS_KEY_ID=your_access_key
   export AWS_SECRET_ACCESS_KEY=your_secret_key
   export AWS_SESSION_TOKEN=your_session_token  # For student accounts
   ```

2. Configure deployment

   ```bash
   cp terraform/terraform.tfvars.example terraform/terraform.tfvars
   # Edit terraform.tfvars with your settings
   ```

3. Deploy to AWS

   ```bash
   ./deploy.sh
   ```
This guide explains how to deploy the real-time multiplayer tic-tac-toe game on AWS with scalable infrastructure.
The application is deployed using:
- AWS ECS Fargate for containerized services
- Application Load Balancer (ALB) for traffic distribution and WebSocket support
- Auto Scaling Groups for elastic scaling
- VPC with public/private subnets across multiple AZs
- Multi-platform Docker images (ARM64 + AMD64) for compatibility
For AWS Student accounts, use LabRole for both the task execution role and the task role to avoid permission errors. The Terraform configuration automatically detects and uses LabRole if it is available.
For simplicity, make your Docker Hub repositories public. This requires no authentication:
- Push your images to public Docker Hub repositories
- No additional configuration needed in Terraform
The infrastructure automatically scales based on:
- CPU Utilization: Scales when CPU > 70%
- Memory Utilization: Scales when memory > 80%
- Request Count: Scales based on incoming requests
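Target-tracking scaling of this kind adjusts capacity proportionally to how far the observed metric is from its target. A simplified TypeScript sketch of the math (illustrative only; the real AWS algorithm also applies cooldowns and scale-in stabilization):

```typescript
// Simplified target-tracking scaling math — illustrative, not the exact
// AWS implementation, which also applies cooldowns and stabilization.
function desiredCapacity(
  current: number,      // running task count
  metricValue: number,  // observed utilization, e.g. 85 (%)
  targetValue: number,  // configured target, e.g. 70 (%)
  min: number,          // minimum capacity
  max: number           // maximum capacity
): number {
  // Scale proportionally so utilization returns to the target,
  // then clamp to the configured capacity bounds.
  const proposed = Math.ceil(current * (metricValue / targetValue));
  return Math.min(max, Math.max(min, proposed));
}

// At 85% CPU against a 70% target, 2 tasks scale out to 3.
console.log(desiredCapacity(2, 85, 70, 1, 10)); // → 3
```

The same proportional rule drives scale-in: at 35% CPU against the 70% target, 4 tasks shrink toward 2, never dropping below the configured minimum.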
You can configure scaling parameters in terraform/terraform.tfvars:
```hcl
# Frontend and Backend scaling can be configured independently

# Development Configuration
frontend_min_capacity     = 1
frontend_max_capacity     = 3
frontend_desired_capacity = 1
backend_min_capacity      = 1
backend_max_capacity      = 3
backend_desired_capacity  = 1

# Production Configuration
frontend_min_capacity     = 2
frontend_max_capacity     = 10
frontend_desired_capacity = 3
backend_min_capacity      = 2
backend_max_capacity      = 8
backend_desired_capacity  = 2

# High Traffic Configuration
frontend_min_capacity     = 3
frontend_max_capacity     = 20
frontend_desired_capacity = 5
backend_min_capacity      = 3
backend_max_capacity      = 15
backend_desired_capacity  = 4
```

Task CPU and memory can be sized in the same file:

```hcl
# Light workload (Student account)
frontend_cpu    = 256  # 0.25 vCPU
frontend_memory = 512  # 0.5 GB RAM
backend_cpu     = 256
backend_memory  = 512

# Medium workload (Student account)
frontend_cpu    = 512   # 0.5 vCPU
frontend_memory = 1024  # 1 GB RAM
backend_cpu     = 512
backend_memory  = 1024

# Heavy workload (Student account)
frontend_cpu    = 1024  # 1 vCPU
frontend_memory = 2048  # 2 GB RAM
backend_cpu     = 1024
backend_memory  = 2048
```

Prerequisites:

- AWS Student Account with access credentials
- Terraform >= 1.0 installed
- Docker installed and running
- Docker Hub account for image registry
Copy and customize the configuration file:
```bash
cp terraform/terraform.tfvars.example terraform/terraform.tfvars
```

Edit terraform/terraform.tfvars with your:
- AWS credentials (Access Key ID, Secret Access Key, Session Token)
- Docker Hub username
- Scaling parameters (optional)
```bash
# Use the automated script for multi-platform builds
./docker-push.sh
```

Note: This script automatically builds multi-platform images (ARM64 + AMD64) and pushes them to Docker Hub. Make sure your Docker Hub repositories are public or configure authentication as described above.
```bash
cd terraform
terraform init
terraform plan
terraform apply -auto-approve
```

To manually scale your services:
```bash
# Scale frontend to 5 instances
aws ecs update-service --cluster tic-tac-toe-cluster \
  --service tic-tac-toe-frontend \
  --desired-count 5

# Scale backend to 3 instances
aws ecs update-service --cluster tic-tac-toe-cluster \
  --service tic-tac-toe-backend \
  --desired-count 3
```

Edit terraform/terraform.tfvars and run terraform apply:
Example for high-traffic scenario:
```hcl
frontend_min_capacity     = 3
frontend_max_capacity     = 15
frontend_desired_capacity = 5
backend_min_capacity      = 2
backend_max_capacity      = 10
backend_desired_capacity  = 3
cpu_target_value          = 60
memory_target_value       = 70
```

The system includes multiple scaling policies:
- CPU-based scaling: Targets 70% CPU utilization
- Memory-based scaling: Targets 80% memory utilization
- Request-based scaling: Scales based on request count per target
Monitor your deployment through AWS Console:
- ECS Service CPU/Memory utilization
- ALB request count and response times
- Target group health
- Auto scaling activities
- Services run in private subnets
- ALB in public subnets with security groups
- IAM roles with least privilege (LabRole for student accounts)
- VPC with proper network ACLs
To destroy all resources:
```bash
cd terraform
terraform destroy -auto-approve
```

For issues or questions:
- Check ECS service status via AWS Console
- Review Terraform state
- Verify Docker images
- Check AWS service limits
- 3x3 Grid: Classic tic-tac-toe rules
- 4x4 Grid: First to get 4 in a row (horizontal, vertical, or diagonal) wins
- Real-time: Moves are synchronized instantly between players
- Spectator Mode: Watch games without participating
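The grid-size rules above share one pattern: on an N×N board, N marks in a row win. That generalization can be sketched as a single check in TypeScript; this is an illustrative implementation, not necessarily the server's actual game logic:

```typescript
// Returns the winning mark on an N×N board where N in a row wins
// (3-in-a-row on 3x3, 4-in-a-row on 4x4), or null if there is no winner.
// Illustrative only — the actual server logic in server.ts may differ.
function findWinner(board: (string | null)[][]): string | null {
  const n = board.length;
  const lines: (string | null)[][] = [];

  for (let i = 0; i < n; i++) {
    lines.push(board[i]);                    // row i
    lines.push(board.map((row) => row[i]));  // column i
  }
  lines.push(board.map((row, i) => row[i]));         // main diagonal
  lines.push(board.map((row, i) => row[n - 1 - i])); // anti-diagonal

  for (const line of lines) {
    if (line[0] !== null && line.every((cell) => cell === line[0])) {
      return line[0];
    }
  }
  return null;
}

console.log(findWinner([
  ["X", "X", "X"],
  ["O", "O", null],
  [null, null, null],
])); // → "X"
```

Because the win length is tied to the board size, the same function covers both the 3x3 and 4x4 modes without special cases.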
- Framework: Next.js 15 with App Router
- Styling: Tailwind CSS
- WebSocket: Native WebSocket API for real-time communication
- State Management: React hooks and context
- Runtime: Node.js with TypeScript
- WebSocket: ws library for WebSocket server
- Game Logic: In-memory game state management
- Scalability: Stateless design for horizontal scaling
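The in-memory state mentioned above can be pictured as a map from game IDs to game state. The sketch below is hypothetical; the names and shapes are assumptions for illustration and are not taken from the actual server.ts:

```typescript
// Hypothetical in-memory game store — names and shapes are illustrative,
// not taken from the repository's server.ts.
interface GameState {
  board: (string | null)[];      // flattened N×N grid
  currentTurn: "X" | "O";
  scores: { X: number; O: number };
}

const games = new Map<string, GameState>();

function createGame(id: string, size: 3 | 4): GameState {
  const game: GameState = {
    board: Array(size * size).fill(null),
    currentTurn: "X",
    scores: { X: 0, O: 0 },
  };
  games.set(id, game);
  return game;
}

function makeMove(id: string, index: number, mark: "X" | "O"): boolean {
  const game = games.get(id);
  // Reject moves on unknown games, occupied cells, or out of turn.
  if (!game || game.board[index] !== null || game.currentTurn !== mark) {
    return false;
  }
  game.board[index] = mark;
  game.currentTurn = mark === "X" ? "O" : "X";
  return true;
}
```

One design note: state held in process memory only scales horizontally if both players of a game reach the same backend task, so a setup like this typically relies on ALB sticky sessions or an external store.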
- Compute: ECS Fargate for containerized services
- Load Balancing: Application Load Balancer
- Networking: VPC with public/private subnets
- Scaling: Auto Scaling Groups with CPU/memory triggers
- Security: Security groups and IAM roles
- Docker Compose for local development and testing
- Hot reload support for both frontend and backend
- Terraform Infrastructure as Code
- Auto-scaling from 1-50+ instances
- Multi-AZ deployment for high availability
- Load balancer for traffic distribution
Frontend:
- NEXT_PUBLIC_WS_URL: WebSocket server URL
- NODE_ENV: Environment (development/production)
Backend:
- PORT: Server port (default: 8080)
- NODE_ENV: Environment (development/production)
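For example, the frontend can fall back to the local backend when NEXT_PUBLIC_WS_URL is unset. This helper is hypothetical, not code from the repository:

```typescript
// Hypothetical helper showing how the frontend might resolve the
// WebSocket URL, falling back to the local development backend.
// Not taken from the actual repository code.
function resolveWsUrl(env: Record<string, string | undefined>): string {
  return env.NEXT_PUBLIC_WS_URL ?? "ws://localhost:8080";
}

console.log(resolveWsUrl({})); // → "ws://localhost:8080"
console.log(resolveWsUrl({ NEXT_PUBLIC_WS_URL: "wss://example.com/ws" })); // → "wss://example.com/ws"
```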
See terraform/terraform.tfvars.example for all configuration options including:
- Instance types and scaling limits
- AWS region and availability zones
- Security and networking settings
- Fork the repository
- Create a feature branch
- Make your changes
- Test locally with Docker Compose
- Submit a pull request
This project is licensed under the MIT License.
- Opeyemi Bright Oginni - Group 40