A lightweight, production-ready memory system that stores conversations and retrieves relevant context using OpenAI embeddings. Perfect for building chatbots with long-term memory.
- 💾 Persistent Storage: SQLite database for reliable conversation storage
- 🧠 Vector Embeddings: OpenAI embeddings for semantic similarity search
- 🔍 Smart Retrieval: Top-k memory retrieval based on context relevance
- 💬 Chatbot Integration: Ready-to-use CLI chatbot interface
- 🎯 Simple API: Clean, intuitive Python API
- 🚀 Production Ready: Proper error handling, logging, and type hints
```bash
# Clone the repository
git clone <your-repo-url>
cd mini-memori

# Install in development mode
pip install -e .
```

- Python 3.8+
- OpenAI API key
Create a `.env` file in the project root:

```
OPENAI_API_KEY=your_api_key_here
```

Or set the environment variable:

```bash
export OPENAI_API_KEY=your_api_key_here
```

```bash
# Run the quick demo to see it in action
python demo.py

# Or verify your installation
python verify_installation.py
```

```python
from mini_memori import MemoryEngine

# Initialize the engine
engine = MemoryEngine(db_path="memories.db")

# Save a message
engine.save_message(
    role="user",
    content="My favorite color is blue",
    conversation_id="conv_1",
)

# Retrieve relevant memories
memories = engine.retrieve_memories(
    query="What is my favorite color?",
    top_k=5,
)

for memory in memories:
    print(f"{memory['role']}: {memory['content']}")
```

```bash
# Start the interactive chatbot
python -m mini_memori.chatbot

# Or use the convenience command
mini-memori-chat
```

Initialize the memory engine.
Parameters:
- `db_path`: Path to the SQLite database file
- `embedding_model`: OpenAI embedding model to use
`save_message(role: str, content: str, conversation_id: str = "default", metadata: dict = None) -> int`
Save a message with its embedding to the database.
Parameters:
- `role`: Message role (e.g., `"user"`, `"assistant"`, `"system"`)
- `content`: Message content
- `conversation_id`: Conversation identifier
- `metadata`: Optional metadata dictionary
Returns: Message ID
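Under the hood, saving a message amounts to inserting a row containing the content, its embedding, and a timestamp. A minimal sketch of that idea, assuming a hypothetical `messages` table with a JSON-serialized embedding column (the real schema lives in `database.py` and may differ):

```python
import json
import sqlite3
from datetime import datetime, timezone

def save_message(conn, role, content, embedding, conversation_id="default", metadata=None):
    """Insert one message row with its embedding; return the new message ID."""
    cur = conn.execute(
        "INSERT INTO messages (role, content, embedding, conversation_id, metadata, created_at) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (
            role,
            content,
            json.dumps(embedding),  # store the vector as JSON text
            conversation_id,
            json.dumps(metadata or {}),
            datetime.now(timezone.utc).isoformat(),
        ),
    )
    conn.commit()
    return cur.lastrowid

# Toy in-memory schema for demonstration only
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, role TEXT, content TEXT, "
    "embedding TEXT, conversation_id TEXT, metadata TEXT, created_at TEXT)"
)
msg_id = save_message(conn, "user", "My favorite color is blue", [0.1, 0.2, 0.3])
print(msg_id)  # → 1 (first row in a fresh table)
```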
`retrieve_memories(query: str, top_k: int = 5, conversation_id: str = None, threshold: float = 0.0) -> List[dict]`
Retrieve the most relevant memories based on semantic similarity.
Parameters:
- `query`: Search query
- `top_k`: Number of results to return
- `conversation_id`: Optional conversation filter
- `threshold`: Minimum similarity threshold (0–1)
Returns: List of memory dictionaries with similarity scores
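The retrieval step boils down to scoring every stored vector against the query vector with cosine similarity, dropping anything below the threshold, and keeping the top `k`. A self-contained sketch of that ranking (toy 2-D vectors stand in for real OpenAI embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k_memories(query_vec, memories, top_k=5, threshold=0.0):
    """Score each memory, filter by threshold, and return the top_k best matches."""
    scored = [
        {**m, "similarity": cosine_similarity(query_vec, m["embedding"])}
        for m in memories
    ]
    scored = [m for m in scored if m["similarity"] >= threshold]
    scored.sort(key=lambda m: m["similarity"], reverse=True)
    return scored[:top_k]

memories = [
    {"content": "My favorite color is blue", "embedding": [0.9, 0.1]},
    {"content": "I had pasta for lunch",     "embedding": [0.1, 0.9]},
]
results = top_k_memories([1.0, 0.0], memories, top_k=1)
print(results[0]["content"])  # → My favorite color is blue
```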
Get recent messages from a specific conversation.
Parameters:
- `conversation_id`: Conversation identifier
- `limit`: Maximum number of messages to return
Returns: List of messages ordered by timestamp
Delete all messages from a conversation.
Parameters:
conversation_id: Conversation identifier
Returns: Number of messages deleted
Get database statistics.
Returns: Dictionary with total messages, conversations, and date ranges
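Statistics like these reduce to a few SQL aggregates over the message table. A rough sketch, again assuming a hypothetical `messages` table with a `created_at` column (the real schema may differ):

```python
import sqlite3

def get_stats(conn):
    """Compute message count, conversation count, and date range in one query."""
    row = conn.execute(
        "SELECT COUNT(*), COUNT(DISTINCT conversation_id), MIN(created_at), MAX(created_at) "
        "FROM messages"
    ).fetchone()
    return {
        "total_messages": row[0],
        "total_conversations": row[1],
        "first_message_at": row[2],
        "last_message_at": row[3],
    }

# Toy data for demonstration
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, conversation_id TEXT, created_at TEXT)"
)
conn.executemany(
    "INSERT INTO messages (conversation_id, created_at) VALUES (?, ?)",
    [("conv_1", "2024-01-01"), ("conv_1", "2024-01-02"), ("conv_2", "2024-01-03")],
)
stats = get_stats(conn)
print(stats)  # 3 messages across 2 conversations, 2024-01-01 to 2024-01-03
```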
```
mini_memori/
├── __init__.py      # Package initialization
├── engine.py        # Core MemoryEngine class
├── database.py      # Database schema and operations
├── embeddings.py    # OpenAI embeddings integration
├── chatbot.py       # Interactive chatbot interface
├── config.py        # Configuration management
└── utils.py         # Utility functions
```
- Message Storage: When you save a message, it's stored in SQLite with a timestamp
- Embedding Generation: OpenAI's API generates a vector embedding for the content
- Similarity Search: Retrieval uses cosine similarity to find the most relevant memories
- Context Assembly: Top-k memories are returned with similarity scores
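The final step, context assembly, typically means formatting the top-k memories into a context block that is prepended to the LLM prompt. One way that could look, with a simple character budget (the formatting is illustrative, not the library's actual output):

```python
def assemble_context(memories, max_chars=1000):
    """Join retrieved memories into a context block for an LLM prompt."""
    lines = []
    used = 0
    for m in memories:  # assumed sorted by similarity, highest first
        line = f"[{m['similarity']:.2f}] {m['role']}: {m['content']}"
        if used + len(line) > max_chars:
            break  # stop before exceeding the prompt budget
        lines.append(line)
        used += len(line)
    return "Relevant memories:\n" + "\n".join(lines)

memories = [
    {"role": "user", "content": "My favorite color is blue", "similarity": 0.92},
    {"role": "user", "content": "I live in Berlin", "similarity": 0.41},
]
print(assemble_context(memories))
```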
- Personal AI Assistants: Build chatbots that remember user preferences
- Knowledge Management: Store and retrieve information semantically
- Conversation Analysis: Track and analyze conversation patterns
- Context-Aware Systems: Provide relevant historical context to LLMs
See the `examples/` directory for more usage examples:

- `basic_usage.py`: Simple save and retrieve operations
- `chatbot_demo.py`: Custom chatbot implementation
- `batch_import.py`: Importing existing conversations
- `memory_search.py`: Advanced search techniques
```bash
# Run all tests
pytest tests/

# Run with coverage
pytest --cov=mini_memori tests/
```

Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenAI for the embeddings API
- SQLite for the reliable database engine
Project Link: https://github.com/rar-file/mini-memori
Built with ❤️ for the AI community