Feature Request: memory-lancedb plugin should support custom embedding providers (like core memory does) #8118
Description
Current Behavior
The memory-lancedb plugin currently only supports OpenAI's official embedding models (`text-embedding-3-small`, `text-embedding-3-large`). It hardcodes the OpenAI client without allowing customization of the `baseUrl` or model selection.
From `extensions/memory-lancedb/config.ts`:

```ts
model: {
  type: "string",
  enum: ["text-embedding-3-small", "text-embedding-3-large"]
}
```

From `extensions/memory-lancedb/index.ts`:

```ts
this.client = new OpenAI({ apiKey }); // No baseUrl customization
```

Desired Behavior
The memory-lancedb plugin should support the same flexible embedding provider configuration as the core memory system (`src/memory/embeddings.ts`), which already supports:
- Custom `baseUrl` for OpenAI-compatible APIs (Ollama, vLLM, etc.)
- Custom model names, not restricted to OpenAI's enum
- Multiple providers: `openai`, `local`, `gemini`, `auto`
Core Memory Already Supports This
The core memory system (`src/memory/embeddings-openai.ts`) already implements this:
```ts
const baseUrl = remoteBaseUrl || providerConfig?.baseUrl?.trim() || DEFAULT_OPENAI_BASE_URL;
// Allows any OpenAI-compatible endpoint including Ollama's /v1/ API
```

Configuration example that works with core memory:

```json
{
  "memorySearch": {
    "provider": "openai",
    "remote": {
      "baseUrl": "http://192.168.1.3:11434/v1/",
      "apiKey": "ollama-local"
    },
    "model": "qwen3-embedding:0.6b-q8_0-1k"
  }
}
```

Proposed Solution
Update the memory-lancedb plugin to:
- Add a `baseUrl` option to the embedding config schema
- Remove the enum restriction on model names (allow any string)
- Use the same fetch-based approach as `embeddings-openai.ts` instead of the `openai` SDK, OR pass `baseUrl` to the OpenAI client
- Support the same `remote` configuration structure as core memory for consistency
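The fetch-based approach from the list above can be sketched roughly as follows. This is a minimal illustration, assuming a standard OpenAI-compatible `/v1/embeddings` response shape; the names `EmbedOptions` and `embed` are illustrative, not the plugin's actual API:

```typescript
// Hypothetical sketch: a minimal embedder that works against any
// OpenAI-compatible /embeddings endpoint (Ollama, vLLM, etc.).
interface EmbedOptions {
  baseUrl: string; // e.g. "http://192.168.1.3:11434/v1/"
  apiKey: string;
  model: string;   // any model name the endpoint serves; no enum restriction
}

async function embed(text: string, opts: EmbedOptions): Promise<number[]> {
  // Trim a trailing slash so the path joins cleanly.
  const url = `${opts.baseUrl.replace(/\/$/, "")}/embeddings`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${opts.apiKey}`,
    },
    body: JSON.stringify({ model: opts.model, input: text }),
  });
  if (!res.ok) {
    throw new Error(`Embedding request failed: ${res.status}`);
  }
  const body = await res.json();
  // OpenAI-compatible response shape: { data: [{ embedding: number[] }] }
  return body.data[0].embedding as number[];
}
```

Because `fetch` is built into Node 18+, this avoids pinning the plugin to the `openai` SDK while remaining compatible with any endpoint that speaks the OpenAI embeddings protocol.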
Benefits
- Users can use local embedding models via Ollama/vLLM (privacy, cost savings)
- Consistent configuration across core memory and lancedb plugin
- No vendor lock-in to OpenAI
- Aligns with OpenClaw's philosophy of supporting local/self-hosted options
Additional Context
The memory-lancedb plugin provides valuable auto-recall and auto-capture features that are not available in core memory. However, the lack of custom embedding provider support forces users to choose between:
- Using core memory with local embeddings (but no auto-recall/capture)
- Using lancedb with auto-recall/capture (but forced to use OpenAI API)
This feature would allow users to have the best of both worlds.
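For concreteness, a plugin configuration mirroring core memory's `remote` structure might look like this. The field layout under the plugin key is hypothetical (no such schema exists yet); only the `baseUrl`/`apiKey`/`model` values are carried over from the working core memory example above:

```json
{
  "memory-lancedb": {
    "embedding": {
      "remote": {
        "baseUrl": "http://192.168.1.3:11434/v1/",
        "apiKey": "ollama-local"
      },
      "model": "qwen3-embedding:0.6b-q8_0-1k"
    }
  }
}
```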
Related files:
- `extensions/memory-lancedb/config.ts` - config schema
- `extensions/memory-lancedb/index.ts` - Embeddings class
- `src/memory/embeddings.ts` - core memory provider selection
- `src/memory/embeddings-openai.ts` - core memory OpenAI implementation with baseUrl support