A starter template for building an MCP server that stores and retrieves information using vector embeddings. This boilerplate provides the foundation for creating your own embedding-based knowledge store that can integrate with Claude or other MCP-compatible AI assistants.
This boilerplate helps you quickly start building:
- A personal knowledge base that remembers information for your AI assistant
- A semantic search interface for your documents or knowledge
- A vector store integration for AI assistants

Out of the box, the server lets you:

- Store content with automatically generated embeddings
- Search content using semantic similarity
- Access content through both tools and resources
- Use pre-defined prompts for common operations
This MCP server template connects to vector embedding APIs to:
- Process content and break it into sections
- Generate embeddings for each section
- Store both the content and embeddings in a database
- Enable semantic search using vector similarity
When you search, the system finds the most relevant sections of stored content based on the semantic similarity of your query to the stored embeddings.
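The ranking step can be sketched in plain TypeScript. This is a minimal illustration, not the boilerplate's actual code: `StoredSection` and `rankSections` are names of my own choosing, and the embeddings here stand in for whatever your embedding API returns.

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A stored section: the chunked text plus its precomputed embedding.
interface StoredSection {
  path: string;
  text: string;
  embedding: number[];
}

// Rank stored sections by similarity to the query embedding and keep the top matches.
function rankSections(
  queryEmbedding: number[],
  sections: StoredSection[],
  maxMatches = 5,
): StoredSection[] {
  return [...sections]
    .sort(
      (x, y) =>
        cosineSimilarity(queryEmbedding, y.embedding) -
        cosineSimilarity(queryEmbedding, x.embedding),
    )
    .slice(0, maxMatches);
}
```

A real store would compute `queryEmbedding` by calling the same embedding API used at ingestion time, and would usually push the similarity ranking down into a vector database rather than sorting in memory.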
```bash
# Clone the boilerplate
git clone https://github.com/yourusername/mcp-embedding-storage-boilerplate.git
cd mcp-embedding-storage-boilerplate

# Install dependencies
pnpm install

# Build the project
pnpm run build

# Start the server
pnpm start
```

After cloning and building, you'll need to:

- Update `package.json` with your project details
- Modify the API integration in `src/` to use your preferred embedding service
- Customize the tools and resources in `src/index.ts`
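One way to keep the embedding service swappable is to hide it behind a small interface. The names below are my own, not from the boilerplate; the dummy provider is only for local testing without network calls.

```typescript
// Hypothetical abstraction over an embedding backend (OpenAI, Hugging Face, etc.).
interface EmbeddingProvider {
  // Returns one embedding vector per input text.
  embed(texts: string[]): Promise<number[][]>;
}

// Toy in-process provider: buckets character codes into a fixed-size vector.
// Useful only as a stand-in while wiring up the rest of the server.
class DummyProvider implements EmbeddingProvider {
  constructor(private readonly dims = 8) {}

  async embed(texts: string[]): Promise<number[][]> {
    return texts.map((text) => {
      const vec = new Array<number>(this.dims).fill(0);
      for (let i = 0; i < text.length; i++) {
        vec[text.charCodeAt(i) % this.dims] += 1;
      }
      return vec;
    });
  }
}
```

With this shape in place, switching from the dummy provider to a real API client is a one-line change wherever the provider is constructed.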
Add the following configuration to your claude_desktop_config.json file:
```json
{
  "mcpServers": {
    "your-embedding-storage": {
      "command": "node",
      "args": ["/path/to/your/dist/index.js"]
    }
  }
}
```

Then restart Claude for Desktop to connect to the server.
Stores content with automatically generated embeddings.
Parameters:
- `content`: The content to store
- `path`: Unique identifier path for the content
- `type` (optional): Content type (e.g., `markdown`)
- `source` (optional): Source of the content
- `parentPath` (optional): Path of the parent content (if applicable)
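As a sketch, the tool's arguments map naturally onto a TypeScript interface. `StoreContentArgs` is a hypothetical name, not something defined by the boilerplate:

```typescript
// Hypothetical argument shape mirroring the parameters listed above.
interface StoreContentArgs {
  content: string;
  path: string;
  type?: string;
  source?: string;
  parentPath?: string;
}

// Example invocation payload for the store tool.
const example: StoreContentArgs = {
  content: "# Vector search\nVector search ranks documents by embedding similarity.",
  path: "notes/vector-search",
  type: "markdown",
};
```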
Searches for content using vector similarity.
Parameters:
- `query`: The search query
- `maxMatches` (optional): Maximum number of matches to return
Resource template for searching content.
Example usage: `search://machine learning basics`
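A resource handler has to recover the query text from such a URI. A minimal sketch, assuming the query may arrive percent-encoded (e.g. spaces as `%20`); `parseSearchUri` is an illustrative helper, not part of the boilerplate:

```typescript
// Extract the query text from a search:// resource URI.
function parseSearchUri(uri: string): string {
  const scheme = "search://";
  if (!uri.startsWith(scheme)) {
    throw new Error(`Unsupported URI scheme: ${uri}`);
  }
  // Decode percent-encoding so "machine%20learning" becomes "machine learning".
  return decodeURIComponent(uri.slice(scheme.length));
}
```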
A prompt to help store new content with embeddings.
Parameters:
- `path`: Unique identifier path for the content
- `content`: The content to store
A prompt to search for knowledge.
Parameters:
query: The search query
You can integrate this boilerplate with various embedding APIs and vector databases:
- OpenAI Embeddings API
- Hugging Face embedding models
- Chroma, Pinecone, or other vector databases
- Vercel AI SDK
MIT