Feature Request: memory-lancedb plugin should support custom embedding providers (like core memory does) #8118

@sebrinass

Description

Current Behavior

The memory-lancedb plugin currently only supports OpenAI's official embedding models (text-embedding-3-small, text-embedding-3-large). It hardcodes the OpenAI client without allowing customization of the baseUrl or model selection.

From extensions/memory-lancedb/config.ts:

model: {
  type: "string",
  enum: ["text-embedding-3-small", "text-embedding-3-large"]
}

From extensions/memory-lancedb/index.ts:

this.client = new OpenAI({ apiKey });  // No baseUrl customization

Desired Behavior

The memory-lancedb plugin should support the same flexible embedding provider configuration as the core memory system (src/memory/embeddings.ts), which already supports:

  1. Custom baseUrl - for OpenAI-compatible APIs (Ollama, vLLM, etc.)
  2. Custom model names - not restricted to OpenAI's enum
  3. Multiple providers - openai, local, gemini, auto

Core Memory Already Supports This

The core memory system (src/memory/embeddings-openai.ts) already implements this:

const baseUrl = remoteBaseUrl || providerConfig?.baseUrl?.trim() || DEFAULT_OPENAI_BASE_URL;
// Allows any OpenAI-compatible endpoint including Ollama's /v1/ API
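For illustration, that fallback chain plus the OpenAI-compatible request it feeds can be sketched as two small helpers (the names `resolveBaseUrl` and `buildEmbeddingsRequest` are hypothetical, not taken from the codebase):

```typescript
const DEFAULT_OPENAI_BASE_URL = "https://api.openai.com/v1";

// Mirrors the fallback above: an explicit remote baseUrl wins,
// then the provider config's baseUrl, then OpenAI's default.
function resolveBaseUrl(remoteBaseUrl?: string, configuredBaseUrl?: string): string {
  return remoteBaseUrl?.trim() || configuredBaseUrl?.trim() || DEFAULT_OPENAI_BASE_URL;
}

// Build a POST against the OpenAI-compatible /embeddings endpoint.
// Any server speaking this API (Ollama's /v1/, vLLM, etc.) accepts this shape.
function buildEmbeddingsRequest(baseUrl: string, model: string, input: string) {
  return {
    url: `${baseUrl.replace(/\/+$/, "")}/embeddings`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, input }),
    },
  };
}
```

With the Ollama config shown below, `resolveBaseUrl(undefined, "http://192.168.1.3:11434/v1/")` resolves to the local endpoint and the request goes to `.../v1/embeddings` with any model name the server knows.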

Configuration example that works with core memory:

{
  "memorySearch": {
    "provider": "openai",
    "remote": {
      "baseUrl": "http://192.168.1.3:11434/v1/",
      "apiKey": "ollama-local"
    },
    "model": "qwen3-embedding:0.6b-q8_0-1k"
  }
}

Proposed Solution

Update memory-lancedb plugin to:

  1. Add baseUrl option to the embedding config schema
  2. Remove the enum restriction on model names (allow any string)
  3. Use the same fetch-based approach as embeddings-openai.ts instead of the openai SDK, or pass a baseURL option to the OpenAI client constructor
  4. Support the same remote configuration structure as core memory for consistency
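A minimal sketch of what the relaxed schema from steps 1-2 could look like, with a tiny validator to make the shape concrete (hypothetical structure; the real config.ts schema format may differ):

```typescript
// Hypothetical relaxed embedding schema for extensions/memory-lancedb/config.ts:
// baseUrl becomes an optional string and model loses its enum restriction.
const embeddingConfigSchema = {
  baseUrl: { type: "string", required: false },
  apiKey: { type: "string", required: true },
  model: { type: "string", required: true }, // any model name, no enum
};

// Checks required keys and primitive types against the schema above.
function validateEmbeddingConfig(cfg: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [key, rule] of Object.entries(embeddingConfigSchema)) {
    const value = cfg[key];
    if (value === undefined) {
      if (rule.required) errors.push(`missing ${key}`);
    } else if (typeof value !== rule.type) {
      errors.push(`${key} must be a ${rule.type}`);
    }
  }
  return errors;
}
```

Under this sketch, `{ apiKey: "ollama-local", model: "qwen3-embedding:0.6b-q8_0-1k", baseUrl: "http://192.168.1.3:11434/v1/" }` validates cleanly. For step 3's SDK route, the openai npm package's constructor already accepts a baseURL option, so the minimal change is `new OpenAI({ apiKey, baseURL })`.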

Benefits

  • Users can use local embedding models via Ollama/vLLM (privacy, cost savings)
  • Consistent configuration across core memory and lancedb plugin
  • No vendor lock-in to OpenAI
  • Aligns with OpenClaw's philosophy of supporting local/self-hosted options

Additional Context

The memory-lancedb plugin provides valuable auto-recall and auto-capture features that are not available in core memory. However, the lack of custom embedding provider support forces users to choose between:

  • Using core memory with local embeddings (but no auto-recall/capture)
  • Using lancedb with auto-recall/capture (but forced to use OpenAI API)

This feature would allow users to have the best of both worlds.


Related files:

  • extensions/memory-lancedb/config.ts - config schema
  • extensions/memory-lancedb/index.ts - Embeddings class
  • src/memory/embeddings.ts - core memory provider selection
  • src/memory/embeddings-openai.ts - core memory OpenAI implementation with baseUrl support
