This is a demo repository showcasing how to build a simple web-based Pong game using the Vibe Compiler (vibec). The Vibe Compiler processes prompt stacks to generate code via LLM integration, and this project demonstrates that workflow with a playable Pong game.
```
vibe-pong/
├── stacks/
│   └── core/
│       ├── 001_create_pong.md    # Initial game setup
│       ├── 002_add_score.md      # Add scoring
│       ├── 003_ai_player.md      # Add AI opponent
│       └── plugins/
│           └── coding-style.md   # Coding style guidelines
├── output/
│   ├── stacks/                   # Per-stage outputs
│   └── current/                  # Latest game files (index.html, styles.css, game.js)
├── eval/                         # Multi-model evaluation results
│   ├── anthropic/                # Claude model evaluations
│   ├── deepseek/                 # DeepSeek model evaluations
│   ├── google/                   # Gemini model evaluations
│   ├── inception/                # Mercury model evaluations
│   ├── openai/                   # GPT model evaluations
│   ├── qwen/                     # Qwen model evaluations
│   ├── x-ai/                     # Grok model evaluations
│   └── results.log               # Evaluation summary
├── eval.sh                       # Script to run evaluation across multiple LLMs
├── vibec.json                    # Configuration
├── package.json                  # Dependencies and scripts
├── yarn.lock                     # Dependency lock file
└── README.md                     # This file
```
- Node.js and npm: Required to run `vibec` and install dependencies.
- LLM API key: An API key for an OpenAI-compatible service (e.g., OpenAI, OpenRouter).
- Python 3: Optional, for serving the game locally.
Follow these steps to build the Pong game using vibec:

1. Get the demo files:

   ```bash
   git clone https://github.com/vgrichina/vibe-pong.git
   cd vibe-pong
   ```

2. Install vibec and other dependencies:

   ```bash
   npm install
   ```

3. Configure the LLM API key as an environment variable (preferred for security):

   ```bash
   export VIBEC_API_KEY=your_api_key_here
   ```

4. Run vibec to process the `stacks/core/` prompts and generate the game files:

   ```bash
   npx vibec
   ```

   What happens:

   - `vibec` reads `vibec.json`, which specifies the `core` stack (a sketch of such a configuration follows this list).
   - It processes each prompt in order:
     - Creates the basic Pong game (`001_create_pong.md`)
     - Adds the scoring system (`002_add_score.md`)
     - Adds the AI opponent (`003_ai_player.md`)
   - Outputs are saved in `output/stacks/` (per stage) and merged into `output/current/` (latest version).
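   For orientation only, that configuration might look roughly like the sketch below. The field names are assumptions made for illustration, not vibec's documented schema; the repository's actual `vibec.json` is the source of truth.

   ```jsonc
   // Hypothetical sketch -- "stacks" and "output" are assumed field names,
   // not confirmed parts of vibec's configuration format.
   {
     "stacks": ["core"],   // which prompt stack(s) under stacks/ to process, in order
     "output": "output"    // where per-stage and merged results are written
   }
   ```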
5. Check the generated files:

   ```bash
   ls output/current/
   ```

   You should see `index.html`, `styles.css`, and `game.js`.
6. Serve the game locally using Python's HTTP server (or any static server):

   ```bash
   cd output/current
   python3 -m http.server 8000
   ```

   Open http://localhost:8000 in your browser to play the Pong game.
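   If you have Node.js but not Python, any Node-based static server works just as well; for example, the `serve` package (an assumption here, not a dependency of this project) can serve the same directory:

   ```bash
   # Serves output/current on a local port (serve defaults to 3000); any static file server is fine.
   npx serve output/current
   ```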
The Pong game implements the following features across three development stages (an illustrative sketch of this logic follows the list):
- Basic Pong Game (Stage 1)
  - Canvas-based game with paddle and ball
  - Keyboard controls for player paddle
  - Ball physics with wall and paddle collisions
- Scoring System (Stage 2)
  - Score tracking when ball hits paddle
  - Score display on screen
  - Ball resets when it misses the paddle
- AI Opponent (Stage 3)
  - Computer-controlled opponent paddle
  - Two-player gameplay (human vs. computer)
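The generated `game.js` is produced by the LLM and varies between runs and models, so the snippet below is only a minimal sketch of the kind of per-frame logic these stages describe (ball movement, paddle collision, scoring on a paddle hit, and a simple ball-tracking AI). None of the names are taken from the actual generated code.

```javascript
// Minimal sketch of the update loop the three stages describe.
// All names (ball, player, ai, ...) are illustrative, not from the generated game.js.
const W = 800, H = 400;
const ball = { x: W / 2, y: H / 2, r: 8, vx: 4, vy: 3 };
const player = { x: 20, y: H / 2 - 40, w: 10, h: 80, score: 0 };
const ai = { x: W - 30, y: H / 2 - 40, w: 10, h: 80, score: 0 };

function hitsPaddle(p) {
  return ball.x + ball.r > p.x && ball.x - ball.r < p.x + p.w &&
         ball.y + ball.r > p.y && ball.y - ball.r < p.y + p.h;
}

function resetBall(direction) {
  ball.x = W / 2;
  ball.y = H / 2;
  ball.vx = 4 * direction;  // serve back toward the middle of the court
  ball.vy = 3;
}

function update() {
  // Stage 1: ball physics -- move, then bounce off the top and bottom walls.
  ball.x += ball.vx;
  ball.y += ball.vy;
  if (ball.y - ball.r < 0 || ball.y + ball.r > H) ball.vy = -ball.vy;

  // Stages 1-2: paddle collisions reverse the ball; a hit also awards a point.
  if (hitsPaddle(player) && ball.vx < 0) { ball.vx = -ball.vx; player.score++; }
  if (hitsPaddle(ai) && ball.vx > 0)     { ball.vx = -ball.vx; ai.score++; }

  // Stage 2: a missed ball resets to the center.
  if (ball.x < 0) resetBall(1);
  if (ball.x > W) resetBall(-1);

  // Stage 3: the AI paddle simply tracks the ball's vertical position, with capped speed.
  const target = ball.y - ai.h / 2;
  ai.y += Math.max(-4, Math.min(4, target - ai.y));
}
```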
The project includes an evaluation script (`eval.sh`) that tests how different LLMs perform when generating the Pong game:

```bash
# Run evaluation across all supported models
./eval.sh
```

This script:

- Tests multiple LLM models, including GPT-4.1, GPT-4o-mini, Claude-3.5-sonnet, Claude-3.7-sonnet, Gemini-2.0-flash, Gemini-2.5-flash-preview, Gemini-2.5-pro-preview, DeepSeek-chat-v3, DeepSeek-r1, Qwen3-235b, Qwen3-30b, Mercury-coder-small-beta, and Grok-3-beta
- Runs each model through the same prompt stacks
- Outputs results to the `eval/` directory with model-specific folders
- Generates a summary in `eval/results.log`
You can use this to benchmark different LLMs and compare their code generation capabilities.
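For a rough idea of the pattern such a multi-model benchmark follows, a sketch is shown below. It is not the actual contents of `eval.sh`, and the `VIBEC_MODEL` variable is a placeholder assumption rather than a documented vibec setting.

```bash
# Illustrative sketch only -- eval.sh in this repository is the real implementation.
# MODELS and VIBEC_MODEL are assumptions made for this example.
MODELS="openai/gpt-4o-mini anthropic/claude-3.5-sonnet google/gemini-2.0-flash"

for model in $MODELS; do
  provider="${model%%/*}"          # e.g. "openai/gpt-4o-mini" -> eval/openai/
  mkdir -p "eval/$provider"
  echo "=== $model ===" >> eval/results.log
  # Hypothetical: run the same prompt stacks once per model.
  if VIBEC_MODEL="$model" npx vibec > "eval/$provider/run.log" 2>&1; then
    echo "$model: ok" >> eval/results.log
  else
    echo "$model: failed" >> eval/results.log
  fi
done
```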
MIT