Releases: pinkfuwa/llumen
llumen v0.5.0-pre
🛠 What’s Fixed
- OpenRouter API Compatibility: We’ve addressed a breaking change in the OpenRouter API (#67) that was disrupting text-only model generation. Everything should be back to working smoothly now.
✨ Improvements
- Intuitive Navigation: We've added gesture support for the sidebar. It’s now much easier to swipe or toggle the menu, making the interface feel more fluid, especially on touch devices.
- Enhanced Accessibility: We've made several adjustments under the hood to improve how Llumen works with assistive technologies, ensuring a better experience for everyone.
- Polished File Uploads: The file upload UX has been refined to be more responsive and clear, so adding documents to your chats feels more natural.
- Image Generation Support: You can now use image-only models directly within Llumen. If the model supports it, Llumen will handle the generation process seamlessly.
- Smarter Prompting: We have further tweaked our internal system prompts to ensure the LLM follows instructions more accurately and provides more relevant answers.
Important
This is a pre-release version. While it fixes the critical OpenRouter issue, some features are still being polished as we head toward the stable v0.5.0 release.
Full Changelog: v0.4.2...v0.5.0-pre
llumen v0.4.2
This is a minor update focused on making your chat experience smoother and more reliable.
While we haven't added new features in this version, we've spent time under the hood to improve how Llumen handles text, APIs, and layout.
🛠 What’s Fixed
- Better Markdown Support: We've improved the parser to handle non-standard markdown more gracefully, so your chats look great even when the formatting gets a bit creative.
- Provider Resilience: Some OpenAI-compatible API providers (like naga.ac) occasionally send malformed responses. Llumen now handles these gracefully instead of hitting a snag.
✨ Improvements
- Screen Real Estate: We've shrunk the input bar slightly. This gives you more room for your actual chat content, which is especially nice if you're working on a wide screen.
- Refined System Prompts: By using `promptfoo` for testing, we've fine-tuned the system prompt for "Normal Mode" to be more effective and consistent.
- Reliable Chat Titles: Generating titles for your conversations is now much more dependable. Whether you're using a top-tier model or a smaller, older one (like `qwen-2.5-7b-instruct`), you'll get clear, snappy titles for your history.
We'd be happy to hear about your setup — for example, which models we should test with promptfoo!
Important
v0.4.2 is not compatible with Windows; the Windows executables are patched with commit 9f21094.
Full Changelog: v0.4.1...v0.4.2
llumen v0.4.0 - Image Gen + Better Compatibility
This release focuses on bridging the gap between text and visuals, while hardening our integration and compatibility.
🌟 What's New
- Image Generation Support: You can now generate images directly inside the chat interface.
- OpenAI Compatibility: Deeply improved handling of standard OpenAI-compatible endpoints.
- UI/UX Polish:
- Major improvements to Markdown generation and rendering.
- New theming options for a cleaner aesthetic.
- Optimized mobile layout.
What do you want to see next? Let us know in the discussion!
Full Changelog: v0.3.1...v0.4.0
llumen v0.4.0-alpha
In our fourth release, we've expanded llumen's creative horizons with image generation, smoother file handling, and even wider compatibility for your favorite model providers.
🌟 What's New
- Visual Creativity: Bring your ideas to life with fully integrated image generation support directly within your chat.
- Seamless File Handling: We've overhauled the upload process and added file downloads—your data moves freely now.
- Smart Model Detection: Working with OpenRouter? llumen now automatically detects model capabilities so you don't have to guess configurations.
- Expanded Compatibility: We've smoothed out the edges for standard OpenAI endpoints, ensuring rock-solid performance with providers like GitHub Copilot.
- A Better Look & Feel: Enjoy a refined mobile experience, sharper markdown rendering, and theme improvements across the board.
Dive back in and start creating! Feedback? Reach out! 🚀
Full Changelog: v0.3.0...v0.4.0-alpha
llumen v0.3.0 - Deep Research
In our third release, we've transformed llumen into a research powerhouse with a blazing-fast engine, native web search, and thoughtful design upgrades.
🌟 What's New
- Deep Research Mode: Go beyond surface-level chats with multi-step investigation and automatic source synthesis.
- Lightning-Fast Rendering: Our new lezer-powered markdown engine slashes frontend size by 70% and streams beautifully in real time.
- General UI Improvements: The new Sunset theme delivers better contrast, readability, and a smoother mobile experience for everyone.
- Cross-Tab Sync: Seamlessly switch between browser tabs with real-time conversation sync.
- Web Search Integration: Instantly access and summarize information from the web directly within your conversations.
Warning
If you're upgrading from v0.2.x, you will need to migrate your data yourself; there is currently no migration script.
Dive back in and level up your LLM chats. Feedback? Reach out! 🚀
Full Changelog: v0.2.0...v0.3.0
llumen v0.2.0 - File upload and mobile support
In our second release of llumen, we've supercharged the app with robust file handling, mobile optimization, and performance tweaks that make chatting smoother and more versatile.
🌟 What's New
- Versatile File Uploads: Seamlessly handle images, PDFs, documents, audio, and more. Bring your files into the conversation for OCR and analysis.
- Mobile-First Magic: Responsive design now shines on phones. Chat fluently on the move without sacrificing features or speed.
- Frontend Performance Boost: Heavy rendering tasks like Markdown and media processing are offloaded to web workers.
- Backend Efficiency Gains: Streaming for large files and chunks slashes memory usage, letting you tackle bigger uploads without breaking a sweat.
- Bug Squashed: Fixed an issue where resumed streams skipped the last control token.
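The streaming idea behind the backend efficiency gains above can be sketched generically: instead of buffering a whole file, process it in fixed-size chunks so peak memory is bounded by the chunk size rather than the file size. This is an illustrative TypeScript sketch (llumen's actual backend is Rust; the names and sizes here are made up for the example):

```typescript
// Illustrative sketch, not llumen's actual code: iterate over a large
// buffer in fixed-size chunks so memory use stays bounded by chunkSize.
function* chunked(data: Uint8Array, chunkSize: number): Generator<Uint8Array> {
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    // subarray returns a view into the same buffer, so no bytes are copied
    yield data.subarray(offset, Math.min(offset + chunkSize, data.length));
  }
}
```

Each chunk can then be written to the response (or parsed) before the next one is touched, which is where the memory savings come from.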
Warning
If you're upgrading from v0.1.x, you need to migrate your data from redb v2 to redb v3. Backup your blobs.redb file first.
Dive back in and level up your LLM chats. Feedback? Reach out! 🚀
Full Changelog: v0.1.1...v0.2.0
llumen v0.1.1 - Ignite LLM Chats with Simplicity! 🚀
Welcome to the initial release of llumen!
Built with Rust for the backend and SvelteKit for the frontend, llumen starts in under 1 second and sips less than 10 MiB of disk space. Dive in, chat with models, and explore modes like web-search-enabled conversations—all out of the box.
🌟 Key Features
- Single API Key Magic: Just plug in your OpenRouter key for full LLM access—no extras needed for search, OCR, embeddings, or more.
- Blazing Fast & Lean: Startup in <1s, tiny footprint (<10 MiB).
- Rich Chat Experience: Markdown rendering with code blocks and math support ($E=mc^2$ looks crisp!). Multiple modes: normal chats, web-search enabled, and upcoming deep-research/agentic features (WIP 🚧).
- Cross-Platform Ready: Windows 🪟 executables, Linux binaries, and Docker 🐳 images for seamless setup.
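The "single API key" flow works because OpenRouter exposes an OpenAI-compatible chat-completions endpoint. As a hedged sketch (the helper name and model id are illustrative, not llumen's actual code), a request against that endpoint looks roughly like this:

```typescript
// Hypothetical helper showing the shape of an OpenAI-compatible
// chat-completions request sent to OpenRouter.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(apiKey: string, userText: string) {
  const messages: ChatMessage[] = [{ role: "user", content: userText }];
  return {
    url: "https://openrouter.ai/api/v1/chat/completions",
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // the single OpenRouter key
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "qwen/qwen-2.5-7b-instruct", // any OpenRouter model id
      messages,
      stream: true, // ask the provider to stream tokens incrementally
    }),
  };
}
```

Because every model behind OpenRouter speaks this one protocol, a single key is enough for chat, search-augmented modes, and the rest.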
📝 Changelog
- Initial release: Core backend (Rust) and frontend (SvelteKit) integration.
- Full LLM chat functionality with OpenRouter support (file upload, image upload, search).
- Markdown rendering, multi-mode chats, and static/distroless Docker builds.
- Screenshots and docs for easy onboarding.