A sleek, modern web interface for generating AI images locally on your machine. Supports multiple backends including native Diffusers, Stable Diffusion WebUI, Ollama, and ComfyUI. Built with performance and ease of use in mind.
- Multiple Backends: Native Diffusers (GPU accelerated), SD WebUI, Ollama, ComfyUI
- Live Preview: Watch your images evolve step-by-step during generation
- GPU Acceleration: Optimized with CUDA support for fast generation
- Concurrent Generation: Queue multiple images at once
- Modern UI: Clean, responsive interface with glassmorphic design
- Dark Theme: Easy on the eyes with smooth animations
- Generation History: Automatically saves all generated images
- Full Gallery: Browse all your creations in a beautiful grid view
- Quick Actions: Download, copy, and fullscreen with one click
- Keyboard Shortcuts: Ctrl+Enter to generate, Escape to close lightbox
- Python 3.8 or higher
- NVIDIA GPU (recommended for best performance)
- CUDA Toolkit (for GPU acceleration)
- Clone the repository:
git clone https://github.com/rar-file/OllamaForge.git
cd OllamaForge
- Create and activate a virtual environment:
python -m venv .venv
.venv\Scripts\activate # Windows
source .venv/bin/activate # Linux/Mac
- Install dependencies:
pip install -r requirements.txt
- Start the server:
python server.py
- Open your browser and navigate to:
http://localhost:6010
- Select your preferred backend and start generating!
- Fastest: Optimized for local GPU acceleration
- Models: dreamshaper-8, Arcane-Diffusion, stable-diffusion-v1-5, and more
- Features: Live preview, model warmup, xFormers optimization
- Endpoint:
http://localhost:7860
- Features: Advanced samplers, negative prompts, full SD WebUI features
- Endpoint:
http://localhost:11434
- Note: Requires MLX support (Mac only)
- Endpoint:
http://localhost:8188
- Features: Node-based workflow support
Clean interface with all controls on the left, generated image on the right with live preview updates.
Browse all your generated images in a beautiful grid layout.
Full-screen image viewing with download and copy options.
The server automatically detects and uses CUDA if available. Models are pre-loaded on startup for instant first generation.
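A minimal sketch of that detection (the variable names below are illustrative, not the actual server.py code):

```python
# Pick the best available device; fall back to CPU when CUDA
# (or torch itself) is unavailable. Illustrative sketch only.
try:
    import torch
    has_cuda = torch.cuda.is_available()
except ImportError:
    has_cuda = False

device = "cuda" if has_cuda else "cpu"
# float16 halves memory use and speeds up GPU inference; CPUs prefer float32
dtype = "float16" if device == "cuda" else "float32"
print(device, dtype)
```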
Edit server.py to add your favorite models:
MODELS = {
"custom-model": {
"repo": "author/model-name",
"name": "Custom Model",
"type": "Stable Diffusion"
}
}
Change the default port (6010) in server.py:
PORT = 8080 # Your preferred port
- Ctrl + Enter - Generate image
- Escape - Close lightbox/modal
- Click outside modal - Close modal
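As a sketch of how a custom MODELS entry might be looked up at generation time (the helper below is hypothetical, not actual server.py code):

```python
# Hypothetical helper: map a model key from the UI to its
# Hugging Face repo id, failing loudly on unknown keys.
MODELS = {
    "custom-model": {
        "repo": "author/model-name",
        "name": "Custom Model",
        "type": "Stable Diffusion",
    }
}

def resolve_repo(model_key: str) -> str:
    entry = MODELS.get(model_key)
    if entry is None:
        raise KeyError(f"Unknown model: {model_key!r}")
    return entry["repo"]

print(resolve_repo("custom-model"))  # author/model-name
```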
ollamaforge/
├── server.py # Python backend server
├── index.html # Frontend interface
├── history/ # Generated images storage
├── requirements.txt # Python dependencies
├── .gitignore # Git ignore rules
└── README.md # This file
Images update every 3 diffusion steps, showing generation progress in real time.
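The pattern resembles the step-end callback Diffusers exposes via `callback_on_step_end`; this sketch only records which steps would emit a preview, with the actual latent decoding elided:

```python
PREVIEW_EVERY = 3  # emit a preview frame every 3 diffusion steps
emitted = []

def on_step_end(pipe, step, timestep, callback_kwargs):
    """Diffusers-style step callback. In the real server this would decode
    callback_kwargs["latents"] to an image and push it to the browser."""
    if step % PREVIEW_EVERY == 0:
        emitted.append(step)
    return callback_kwargs  # Diffusers expects the kwargs dict back

for step in range(12):  # simulate a 12-step generation
    on_step_end(None, step, None, {})
print(emitted)  # [0, 3, 6, 9]
```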
Generate multiple images concurrently. Queue status shown with progress bars.
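One way to sketch such a queue (a single worker is shown for clarity; the real server may run several, and all names here are illustrative):

```python
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    """Drain generation jobs one at a time; a None job stops the loop.
    Real image generation is replaced by a placeholder string."""
    while True:
        prompt = jobs.get()
        if prompt is None:
            break
        results.append(f"image for: {prompt}")
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
for p in ["a castle at dusk", "a neon city", "a paper crane"]:
    jobs.put(p)
jobs.join()    # block until every queued job is processed
jobs.put(None) # shut the worker down
t.join()
print(len(results))  # 3
```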
Models are kept in memory for instant subsequent generations.
- xFormers for memory-efficient attention
- VAE slicing for large images
- Float16 precision for speed
- Eval mode for inference
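These optimizations can be combined into a loader along these lines (a sketch: the method names are real Diffusers APIs, but the function itself is illustrative and assumes a CUDA GPU):

```python
def load_optimized(repo_id: str):
    """Load a Stable Diffusion pipeline with the optimizations listed above.
    Imports are deferred so the sketch stays cheap to define."""
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        repo_id,
        torch_dtype=torch.float16,  # half precision for speed
    )
    pipe.enable_vae_slicing()       # decode large images slice by slice
    try:
        # memory-efficient attention; xFormers is an optional dependency
        pipe.enable_xformers_memory_efficient_attention()
    except Exception:
        pass
    pipe.unet.eval()                # inference only, no gradient tracking
    return pipe.to("cuda")
```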
Make sure at least one backend is running:
- For native: Dependencies installed correctly
- For SD WebUI: Server running on port 7860
- For Ollama: Server running on port 11434
- Ensure CUDA is available: check torch.cuda.is_available()
- Use lower resolution or fewer steps
- Close other GPU-intensive applications
- Reduce image resolution
- Lower batch size to 1
- Enable VAE slicing (already enabled by default)
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
- Stable Diffusion - The foundation model
- Diffusers - Hugging Face's diffusion library
- AUTOMATIC1111 - SD WebUI
- All the amazing model creators on Hugging Face
For issues, questions, or suggestions, please open an issue on GitHub.
Made for the AI art community
