A clean, stateless AI-powered text transformation tool designed for local AI models and OpenAI-compatible APIs. Each transformation is treated as an independent request with no chat history, optimizing performance for local AI inference.
- 🔄 Stateless Design - Each request sends system prompt + input together, no conversation memory
- 🤖 OpenAI-Compatible API - Works with OpenAI, Ollama, LM Studio, vLLM, and any compatible endpoint
- ⚙️ Flexible Configuration - Configurable base URL, model, API key, temperature, and max tokens
- 📝 Preset Templates - Built-in prompts for email formatting, summarization, bullet points, and more
- 💾 Browser History - Local storage of transformations for easy reuse (up to 100 items)
- 🌙 Dark Mode - Full dark/light/system theme support
- 📋 Copy & Clear Tools - Streamlined workflow for text processing
- 🚀 Fast & Lightweight - React 19 + Vite + Tailwind CSS
- Local AI Users - Optimized for minimal context to maximize local performance
- Text Processing Workflows - Email formatting, content summarization, style transformation
- Privacy-Conscious Users - All data stays local, no external dependencies
- Multi-Model Testing - Easy switching between different AI endpoints
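The stateless design can be sketched as a request builder that always sends exactly two messages, system prompt plus input, with no prior turns. This is a minimal sketch; the helper name and `Settings` shape are illustrative, not the app's actual `api.ts`:

```typescript
interface Settings {
  model: string;
  temperature: number;
  maxTokens: number;
}

// Every transform builds a fresh payload from scratch. No previous turns are
// appended, which keeps the context window minimal for local inference.
function buildRequestBody(systemPrompt: string, input: string, settings: Settings) {
  return {
    model: settings.model,
    temperature: settings.temperature,
    max_tokens: settings.maxTokens,
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: input },
    ],
  };
}
```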
No Node.js required! Just serve the pre-built files:
```bash
# Download and extract the repository
git clone https://github.com/IgorWarzocha/stateless-AI-text-transform.git
cd stateless-AI-text-transform

# Serve the built files (choose one):

# Python (most common)
cd dist && python -m http.server 8080

# Node.js
cd dist && npx serve

# Or use any web server pointed to the dist/ folder
```

Then open http://localhost:8080 in your browser.
Requirements: Node.js 18+ and npm
```bash
git clone https://github.com/IgorWarzocha/stateless-AI-text-transform.git
cd stateless-AI-text-transform
npm install
npm run build
npm run preview
```

- Click the Settings button in the top-right corner
- Configure your AI endpoint. For example:

  OpenAI:
  - Base URL: https://api.openai.com/v1
  - Model: gpt-4o
  - API Key: sk-...

  Ollama (local):
  - Base URL: http://localhost:11434/v1
  - Model: llama3.2
  - API Key: (leave empty)

  LM Studio (local):
  - Base URL: http://localhost:1234/v1
  - Model: (your loaded model name)
  - API Key: (leave empty)

  Other OpenAI-compatible endpoints:
  - Base URL: https://your-endpoint.com/v1
  - Model: your-model-name
  - API Key: your-api-key (if required)
- Select a Preset or write a custom system prompt
- Paste your text in the input area
- Click Transform - each request is independent
- Copy the output or save it to history
- Access History to reuse previous transformations
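Whichever endpoint you configure, the base URL and optional API key combine into a standard OpenAI-style request. A hedged sketch, with illustrative helper names, of how those settings are typically assembled:

```typescript
// Joins the configured base URL with the standard chat-completions path.
function buildEndpoint(baseUrl: string): string {
  return baseUrl.replace(/\/+$/, "") + "/chat/completions";
}

// Local servers (Ollama, LM Studio) usually need no Authorization header,
// which is why the API key field can be left empty.
function buildHeaders(apiKey?: string): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (apiKey) headers["Authorization"] = `Bearer ${apiKey}`;
  return headers;
}
```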
Email Formatting:
- Preset: "Format as Email"
- Input: Raw notes or bullet points
- Output: Professional email with proper structure
Content Summarization:
- Preset: "Summarize"
- Input: Long article or document
- Output: Concise summary of key points
Style Transformation:
- Preset: "Make Formal" or "Make Casual"
- Input: Text in one style
- Output: Same content in different tone
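Each preset pairs a name with a reusable system prompt. A hypothetical sketch of that shape (the real definitions live in `src/lib/presets.ts`; the interface and prompt wording here are assumptions):

```typescript
interface Preset {
  name: string;
  systemPrompt: string;
}

// Hypothetical example of a built-in preset, not the app's actual prompt text.
const formatAsEmail: Preset = {
  name: "Format as Email",
  systemPrompt:
    "Rewrite the user's notes as a professional email with a greeting, clear body, and sign-off.",
};
```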
- Frontend: React 19 + TypeScript + Vite
- Styling: Tailwind CSS v4 + shadcn/ui design system
- Storage: Browser localStorage for settings and history
- API: OpenAI-compatible REST API calls
- Bundle Size: ~240KB (minified + gzipped)
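History is capped at 100 items before being persisted to localStorage. A minimal sketch of that trimming logic; the function name and item shape are assumptions, not the real `storage.ts`:

```typescript
interface HistoryItem {
  systemPrompt: string;
  input: string;
  output: string;
  timestamp: number;
}

// Prepend the newest transformation and drop anything beyond the limit.
function addToHistory(history: HistoryItem[], item: HistoryItem, limit = 100): HistoryItem[] {
  return [item, ...history].slice(0, limit);
}
```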
```bash
# Install dependencies
npm install

# Start development server
npm run dev

# Build for production
npm run build

# Preview production build
npm run preview

# Lint code
npm run lint
```

```
src/
├── components/          # React components
│   ├── HistorySidebar.tsx
│   ├── ModeToggle.tsx
│   ├── SettingsModal.tsx
│   └── ThemeProvider.tsx
├── lib/                 # Utilities
│   ├── api.ts           # OpenAI-compatible API client
│   ├── presets.ts       # Built-in prompt templates
│   ├── storage.ts       # localStorage helpers
│   └── utils.ts         # General utilities
├── types.ts             # TypeScript definitions
├── App.tsx              # Main application
└── main.tsx             # Application entry point
```
- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Make your changes and test thoroughly
- Submit a pull request
MIT License - feel free to use this project for personal or commercial purposes.
Some local AI servers require CORS headers. Enable CORS in your AI server settings:
- Ollama: Automatically handles CORS for browser requests
- LM Studio: Enable "Allow CORS" in server settings
- Custom servers: Add appropriate CORS headers
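For custom servers, "appropriate CORS headers" means at minimum allowing the app's origin, the POST method, and the Content-Type and Authorization request headers. A hedged sketch (exact origin and header values depend on your setup):

```typescript
// Response headers a custom OpenAI-compatible server should send so this
// browser app can call it. "*" is convenient for local testing; prefer the
// app's actual origin for anything exposed beyond localhost.
function corsHeaders(allowedOrigin = "*"): Record<string, string> {
  return {
    "Access-Control-Allow-Origin": allowedOrigin,
    "Access-Control-Allow-Methods": "POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
  };
}
```

The server must also answer OPTIONS preflight requests with these same headers, since browsers send a preflight before any cross-origin POST with a JSON body.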
- Verify the base URL is correct and accessible
- Check if the model name matches exactly
- For local models, ensure the AI server is running
- Use the "Test" button in settings to verify connection
- Use lower `max_tokens` values for faster responses
- Reduce `temperature` for more consistent outputs
- Keep system prompts concise for better local AI performance
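As a starting point for local inference, something like the following tends to work well; the exact values are illustrative and should be tuned for your model and hardware:

```typescript
// Illustrative settings for snappy, consistent local inference.
const localSettings = {
  temperature: 0.3, // lower values give more deterministic output
  maxTokens: 512,   // lower values return faster on local hardware
};
```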
Built with ❤️ for the AI community. Optimized for local AI inference and privacy-focused workflows.