A .NET 9 RAG (Retrieval-Augmented Generation) console application that uses Ollama for embeddings and chat. It lets you ask questions about your own PDF documents and get AI-powered answers.
- PDF document processing and chunking
- Embedding generation using Ollama's embedding models
- Simple vector store for semantic search
- RAG integration with a Llama 3 model (served by Ollama) for chat responses
- Simple console-based chat interface
- .NET 9.0 SDK
- Ollama installed and running locally
- PDF documents to query
- Install Ollama following the instructions at https://ollama.ai/
- Pull the required models:

  ```
  ollama pull nomic-embed-text
  ollama pull llama3:8b
  ```

- Start the Ollama server:

  ```
  ollama serve
  ```

- Build the application:

  ```
  dotnet build
  ```

- Run the application:

  ```
  dotnet run
  ```
- Place your PDF documents in the `Documents` folder
- Launch the application
- The application will automatically process all PDFs in the Documents folder
- Type your questions and get AI-powered responses
- Type 'exit' to quit the application
- Document Processing: The application reads PDF files from the Documents folder, extracts text, and chunks it into smaller pieces.
- Vector Embedding: Each text chunk is converted into a vector embedding using Ollama's embedding model.
- Retrieval: When you ask a question, the application finds the most relevant document chunks by computing cosine similarity between your question and the document embeddings.
- Answer Generation: The relevant chunks are sent to Ollama's LLM along with your question to generate a contextually informed answer.
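The retrieval step above can be sketched as follows. This is a minimal illustration of cosine similarity ranking; the names are hypothetical and not the actual `SimpleVectorStore` API.

```csharp
using System;

// A minimal sketch of the retrieval step: ranking chunks by cosine
// similarity between the question embedding and each chunk embedding.
// Names here are illustrative, not the actual SimpleVectorStore API.
public static class CosineDemo
{
    public static double CosineSimilarity(float[] a, float[] b)
    {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot   += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
    }

    public static void Main()
    {
        var question = new float[] { 0.1f, 0.8f, 0.2f };
        var chunkA   = new float[] { 0.1f, 0.9f, 0.1f }; // points the same way as the question
        var chunkB   = new float[] { 0.9f, 0.0f, 0.1f }; // points elsewhere
        // The chunk whose embedding is most aligned with the question ranks first.
        Console.WriteLine(CosineSimilarity(question, chunkA) > CosineSimilarity(question, chunkB)); // True
    }
}
```

In practice the real embeddings from `nomic-embed-text` have hundreds of dimensions, but the ranking logic is the same: compute the similarity against every stored chunk and keep the top few.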
- `Program.cs`: Entry point and dependency injection setup
- `Services/`: Contains all service implementations
  - `IChatService.cs`: Interface for chat services
  - `OllamaChatService.cs`: Implementation for Ollama chat
  - `IDocumentProcessor.cs`: Interface for document processing
  - `PdfDocumentProcessor.cs`: PDF document processor implementation
  - `IEmbeddingService.cs`: Interface for embedding services
  - `OllamaEmbeddingService.cs`: Ollama embedding service implementation
  - `IVectorStore.cs`: Interface for vector storage
  - `SimpleVectorStore.cs`: In-memory vector store implementation
  - `IRagService.cs`: Interface for RAG services
  - `RagService.cs`: Main RAG service implementation
- In-memory vector store (not persistent between sessions)
- Only handles PDF documents
- Requires local Ollama installation
- Simple text chunking without advanced techniques
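The "simple text chunking" noted above amounts to fixed-size windows. A sketch of that approach, with a small overlap so a sentence is never stranded without context, might look like this; the sizes are hypothetical defaults, not the values `PdfDocumentProcessor` actually uses:

```csharp
using System;
using System.Collections.Generic;

// A sketch of simple fixed-size chunking: sliding windows with a small
// overlap. Sizes are hypothetical, not the project's actual settings.
public static class ChunkDemo
{
    public static List<string> Chunk(string text, int chunkSize = 500, int overlap = 50)
    {
        var chunks = new List<string>();
        int step = chunkSize - overlap; // must stay positive
        for (int start = 0; start < text.Length; start += step)
        {
            int length = Math.Min(chunkSize, text.Length - start);
            chunks.Add(text.Substring(start, length));
            if (start + length >= text.Length) break; // last window reached the end
        }
        return chunks;
    }

    public static void Main()
    {
        var chunks = Chunk(new string('x', 1000));
        Console.WriteLine(chunks.Count); // 3 windows: 0-499, 450-949, 900-999
    }
}
```

More advanced techniques would split on sentence or paragraph boundaries instead of raw character offsets, which is exactly the limitation called out above.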