{"id":3668885,"date":"2026-03-24T08:00:37","date_gmt":"2026-03-24T12:00:37","guid":{"rendered":"https:\/\/spin.atomicobject.com\/?p=3668885"},"modified":"2026-03-23T15:28:58","modified_gmt":"2026-03-23T19:28:58","slug":"rag-app-beginners-tutorial","status":"publish","type":"post","link":"https:\/\/spin.atomicobject.com\/rag-app-beginners-tutorial\/","title":{"rendered":"How to Build a RAG App, for Beginners: Local LLMs, Ollama, and LangChain"},"content":{"rendered":"<p>This tutorial is for developers, designers who code, or anyone new to AI who wants a hands-on introduction to building a custom AI chatbot that can search and answer questions using your own data.<\/p>\n<hr \/>\n<p>I wanted to build an AI-powered tool for our team, but I had zero experience building AI applications\u2014so I decided to figure it out. While researching, I found this <a href=\"https:\/\/www.youtube.com\/watch?v=HRvyei7vFSM\"><strong>YouTube video<\/strong><\/a> by Santiago Valdarrama on building a Retrieval-Augmented Generation (RAG) system with LangChain, and it turned out to be a great starting point.<\/p>\n<p>Instead of just following along, I broke everything down step by step, adding explanations and extra context to help me understand what was happening. This walkthrough is my way of organizing what I learned\u2014and hopefully making it easier for anyone else figuring this out for the first time!<\/p>\n<p>Fair warning\u2014this is long. Like, <em>really<\/em> long. If you\u2019re looking for a quicker, more to-the-point version, go watch the video\u2014it\u2019s great, concise, and easy to follow. 
But if you\u2019re like me and want to really understand what\u2019s happening under the hood, then roll up your sleeves.<\/p>\n<p><strong>This tutorial will walk through how to build a simple LangChain app that:<\/strong><\/p>\n<ol>\n<li>Loads a PDF document.<\/li>\n<li>Splits it into pages.<\/li>\n<li>Converts each page into <strong>embeddings<\/strong>.<\/li>\n<li>Uses a <strong>retriever<\/strong> to fetch the most relevant pages based on a question.<\/li>\n<li>Invokes a <strong>chain<\/strong> (with a prompt template and parser) to answer the question.<\/li>\n<\/ol>\n<p><strong>In this tutorial, you will learn how to:<\/strong><\/p>\n<ul>\n<li>Install and run an open\u2010source local LLM using Ollama.<\/li>\n<li>Switch between GPT models (via an API) and local LLMs without changing your application code.<\/li>\n<li>Use LangChain to build chains, incorporate custom prompt templates, and retrieve relevant documents.<\/li>\n<li>Build a simple retrieval-augmented generation (RAG) system to answer questions from a PDF.<\/li>\n<li>And, if you\u2019re not already familiar, you\u2019ll learn what some of these words mean.<\/li>\n<\/ul>\n<hr \/>\n<h2>Part One: Project Setup in VS Code<\/h2>\n<p>In this section, we\u2019ll set up our development environment to run a <strong>local large language model (LLM)<\/strong>. By the end, you\u2019ll have a working <strong>Jupyter Notebook<\/strong> where you can seamlessly switch between <strong>OpenAI\u2019s API<\/strong> and a locally running model like <strong>LLaMA 2 or Mixtral<\/strong>.<\/p>\n<p><strong>Already familiar with this setup?<\/strong> You might want to skip ahead and check out <strong>Santiago Valdarrama\u2019s <a href=\"https:\/\/github.com\/\">GitHub project<\/a> (llm)<\/strong>, which accompanies his YouTube tutorial on <strong>building a Retrieval-Augmented Generation (RAG) system with LangChain<\/strong>. 
Otherwise, let\u2019s get started!<\/p>\n<hr \/>\n<p>This tutorial assumes you have a basic familiarity with programming concepts, command-line usage, and Visual Studio Code (VS Code). It also uses Jupyter notebooks inside VS Code.<\/p>\n<p><strong>\u26d4 Dependencies:<\/strong> Here\u2019s what you\u2019ll need to get up and running:<\/p>\n<ul>\n<li>Visual Studio Code<\/li>\n<li>Jupyter (extension for Visual Studio Code)<\/li>\n<li>Python (extension for Visual Studio Code)<\/li>\n<\/ul>\n<h3>1. Installing Ollama and Downloading Models<\/h3>\n<p>Ollama is a lightweight tool that acts as a wrapper around several open-source LLMs (like Llama 2, Mixtral, etc.) to run them via a common interface.<\/p>\n<p><strong>Download and Install Ollama:<\/strong><\/p>\n<ul>\n<li>Visit <a href=\"https:\/\/ollama.com\/\">https:\/\/ollama.com\/<\/a> (or the appropriate download page) and download the version for your operating system (Mac, Linux, or Windows).<\/li>\n<li>Follow the installation instructions. The first time you run it, it may prompt you to install command-line tools or download a model (e.g., Llama 2).<\/li>\n<li>Using the Command Line (Terminal on Mac):<\/li>\n<\/ul>\n<pre><code class=\"language-bash\"># List available commands by typing:\r\nollama --help\r\n\r\n# Install a model (for example, Llama 2) by running:\r\nollama pull llama2\r\n\r\n# Verify your installed models:\r\nollama list\r\n\r\n# To start serving a model locally, run:\r\nollama run llama2\r\n<\/code><\/pre>\n<p>You can now interact with your model through the command line. Try typing a prompt like \u201ctell me a joke\u201d. For other models, view the list here: <a href=\"https:\/\/ollama.com\/search\">https:\/\/ollama.com\/search<\/a><\/p>\n<h3><strong>2. Creating a Project Directory:<\/strong><\/h3>\n<p>First, we need to set up a project workspace in VS Code, ensuring there is a clean and isolated environment to work with. 
We&#8217;ll create a dedicated directory, set up a Jupyter Notebook, and configure a virtual environment along with environment variables for sensitive data like your API keys.<\/p>\n<p>Create a new directory (e.g., local-model) and open it in VS Code. You can do this manually or via the command line: <code>mkdir local-model<\/code><\/p>\n<h3><strong>3. Set Up a Jupyter Notebook:<\/strong><\/h3>\n<ul>\n<li>Create a new Jupyter Notebook (e.g., <code>notebook.ipynb<\/code>) in your project.<\/li>\n<li>If you haven\u2019t already, install the Jupyter and Python extensions for VS Code.<\/li>\n<\/ul>\n<h3><strong>4. Creating a Virtual Environment:<\/strong><\/h3>\n<p>We\u2019re using a virtual environment to keep dependencies isolated and ensure that installing packages doesn\u2019t interfere with other projects.<\/p>\n<ul>\n<li>Open the terminal in VS Code and run: <code>python3 -m venv .venv<\/code><\/li>\n<li>Activate the virtual environment: <code>source .venv\/bin\/activate<\/code><\/li>\n<\/ul>\n<h3>5. Making Sure It All Runs:<\/h3>\n<p>Now, let\u2019s confirm that everything is set up correctly. Open your Jupyter Notebook and run a simple Python command to check that your environment is working properly. Add <code>print(\"Hello World\")<\/code> to your file and run it. When this runs, you\u2019ll be prompted to select a kernel. 
Select Python Environment, then select the Python .venv<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-3668899 aligncenter\" src=\"https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.46.29\u202fPM.png\" alt=\"\" width=\"1306\" height=\"462\" srcset=\"https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.46.29\u202fPM.png 1306w, https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.46.29\u202fPM-590x209.png 590w, https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.46.29\u202fPM-1024x362.png 1024w, https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.46.29\u202fPM-150x53.png 150w, https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.46.29\u202fPM-768x272.png 768w, https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.46.29\u202fPM-600x212.png 600w, https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.46.29\u202fPM-1200x425.png 1200w\" sizes=\"auto, (max-width: 1306px) 100vw, 1306px\" \/><\/p>\n<p>Then run your file:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-3668900 aligncenter\" src=\"https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.49.40\u202fPM.png\" alt=\"\" width=\"932\" height=\"318\" srcset=\"https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.49.40\u202fPM.png 932w, https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.49.40\u202fPM-590x201.png 590w, https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.49.40\u202fPM-150x51.png 150w, https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.49.40\u202fPM-768x262.png 768w, https:\/\/spin.atomicobject.com\/wp-content\/uploads\/Screenshot-2025-03-04-at-1.49.40\u202fPM-600x205.png 600w\" 
sizes=\"auto, (max-width: 932px) 100vw, 932px\" \/><\/p>\n<h3>6. Setting Up Libraries &amp; Environment Variables:<\/h3>\n<p>Create an environment file (e.g., <code>.env<\/code>) inside your folder to store your API keys and other configuration. Get your OpenAI API key and store it in this variable:<\/p>\n<pre><code>OPENAI_API_KEY=your_openai_key_here\r\n<\/code><\/pre>\n<p>In your notebook, load these variables using Python\u2019s <code>os<\/code> library or a dedicated library (like <code>dotenv<\/code>).<\/p>\n<pre><code class=\"language-python\">#################################################################################\r\n### IMPORT LIBRARIES\r\n#################################################################################\r\nimport os\r\n\r\n# Library that reads environment variables in the .env files\r\nfrom dotenv import load_dotenv\r\nload_dotenv()\r\n\r\n#################################################################################\r\n### IMPORT - Load environment variables\r\n#################################################################################\r\n\r\n# Get the OpenAI Key\r\n# Why do we need to use an OpenAI API when we are running models locally?\r\n# We want to test everything we are doing locally with openai\/gpt to see how they compare. 
\r\nOPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\r\n\r\n#################################################################################\r\n### DEFINE MODELS\r\n#################################################################################\r\nMODEL = \"gpt-3.5-turbo\"  # ChatGPT model\r\n# MODEL = \"mixtral:8x7b\"  # Locally running open-source model\r\n# MODEL = \"llama2\"\r\n<\/code><\/pre>\n<p>To read environment variables, install python-dotenv in the terminal by running <code>pip install python-dotenv<\/code>.<\/p>\n<p>\u2754<strong>Why use a local LLM?<\/strong><\/p>\n<p>Before diving into the code, here are some reasons why you might want to run a local LLM:<\/p>\n<ul>\n<li><strong>Cost Efficiency:<\/strong> Open-source models can be significantly cheaper than using external APIs.<\/li>\n<li><strong>Privacy:<\/strong> Keeping everything in-house avoids sending data to third-party APIs.<\/li>\n<li><strong>Offline Usage:<\/strong> Local models are ideal for edge devices, robotics, or environments with no internet connectivity.<\/li>\n<li><strong>Backup:<\/strong> They can serve as a backup if the external API is unavailable.<\/li>\n<\/ul>\n<p><strong>At this point, we have:<\/strong><\/p>\n<ul>\n<li>A locally running AI model using <strong>Ollama<\/strong>.<\/li>\n<li>A working <strong>Jupyter Notebook<\/strong> environment.<\/li>\n<li>A virtual environment to manage dependencies.<\/li>\n<li>The ability to <strong>switch between OpenAI\u2019s API and a local model<\/strong> like LLaMA 2.<\/li>\n<\/ul>\n<hr \/>\n<h3>Did You Know That\u2026<\/h3>\n<p>Before we move on to the next step, let\u2019s review some vocabulary.<\/p>\n<ul>\n<li><strong>Python &#8211;<\/strong> Widely used in artificial intelligence and machine learning due to its large ecosystem of libraries (such as TensorFlow, PyTorch, and scikit-learn), ease of prototyping, and strong community support. 
For beginners, Python is a great starting point because it allows you to focus on learning AI concepts without getting bogged down in complex syntax. Most AI tutorials, courses, and research papers use Python, making it easier to find resources, examples, and help as you learn.<\/li>\n<li><strong>Jupyter Notebook<\/strong> \u2013 An interactive development environment (IDE) that runs Python in structured cells.<\/li>\n<li><strong>Large Language Model (LLM)<\/strong> \u2013 An AI model trained to process and generate human-like text.<\/li>\n<li><strong>Local LLM<\/strong> &#8211; A language model (like LLaMA 2 or Mixtral) that runs on your local machine instead of an API-based model.<\/li>\n<li><strong>Ollama<\/strong> \u2013 A tool that simplifies running open-source large language models (LLMs) on your local machine.<\/li>\n<li><strong>Llama2<\/strong> \u2013 An open-source LLM that can be run locally with Ollama.<\/li>\n<li><strong>API Key<\/strong> \u2013 A unique credential used to authenticate access to external services (like OpenAI&#8217;s API).<\/li>\n<\/ul>\n<hr \/>\n<h2>Part Two: Setting Up LangChain<\/h2>\n<h3>What is LangChain?<\/h3>\n<p><strong>LangChain<\/strong> is an open-source framework designed to help developers build applications powered by Large Language Models (LLMs) like GPT, Llama, and Claude. In <strong>LangChain<\/strong>, a <strong>Chain<\/strong> is a structured sequence of operations that process inputs (e.g., user queries) through one or more steps before producing an output. 
Chains allow you to <strong>combine multiple components<\/strong>, such as <strong>LLMs, retrieval systems, APIs, and logic<\/strong>, into a pipeline.<\/p>\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=1bUy-1hGZpI\">https:\/\/www.youtube.com\/watch?v=1bUy-1hGZpI<\/a><\/p>\n<p><a href=\"http:\/\/youtube.com\/watch?v=cDn7bf84LsM\">http:\/\/youtube.com\/watch?v=cDn7bf84LsM<\/a><\/p>\n<h3>Setting Up LangChain<\/h3>\n<p>First, install LangChain in the terminal so we can use it.<\/p>\n<pre><code class=\"language-bash\">pip install langchain_openai\r\npip install langchain\r\npip install langchain_community\r\n<\/code><\/pre>\n<p>Using LangChain, create a very simple model in the notebook to make sure that the API is working.<\/p>\n<pre><code class=\"language-python\">from langchain_openai.chat_models import ChatOpenAI\r\n\r\nmodel = ChatOpenAI(api_key=OPENAI_API_KEY, model=MODEL)\r\nmodel.invoke(\"Tell me a joke.\")\r\n<\/code><\/pre>\n<h3>Creating a Prompt Template and Parser<\/h3>\n<p>When working with different language models, it\u2019s important to understand how they return results:<\/p>\n<ul>\n<li><strong>Chat Models (e.g., ChatGPT Turbo): <\/strong>These models are designed for conversational interactions. Their outputs are typically wrapped in an <code>AIMessage<\/code> object (for example, <code>AIMessage(content=\"...\")<\/code>). This structure is useful when you want to keep track of conversation context, such as differentiating between user and assistant messages.<\/li>\n<li><strong>Completion Models (e.g., Llama2): <\/strong>In contrast, completion models return a plain string as their output. They are optimized for generating text completions rather than managing dialogue context.<\/li>\n<\/ul>\n<p>First, we define a function or use a conditional check to select the correct model type based on our configuration. 
In our code, this means checking if the model\u2019s name starts with \u201cgpt\u201d (indicating a chat model) or not (indicating a local completion model like Llama2). Depending on this check, we instantiate the model accordingly and ensure that the parser is applied to handle differences in the output formats.<\/p>\n<p>This approach allows your chain to work seamlessly with either the GPT model or your local LLM without needing to change the downstream logic.<\/p>\n<pre><code class=\"language-python\">from langchain_openai.chat_models import ChatOpenAI\r\nfrom langchain_community.llms import Ollama\r\n\r\nif MODEL.startswith(\"gpt\"):\r\n\tmodel = ChatOpenAI(api_key=OPENAI_API_KEY, model=MODEL)\r\nelse:\r\n\tmodel = Ollama(model=MODEL)\r\n\r\nmodel.invoke(\"Tell me a joke.\")\r\n<\/code><\/pre>\n<p>Because our application requires a consistent output format regardless of which model is used, we need to introduce a parser. The parser will convert the output\u2014whether it\u2019s an <code>AIMessage<\/code> or a plain string\u2014into a standardized format (a simple string) that the rest of our pipeline can process uniformly.<\/p>\n<pre><code class=\"language-python\">from langchain_core.output_parsers import StrOutputParser\r\nparser = StrOutputParser()\r\n\r\n# Create a LangChain chain\r\n# LangChain sends a request to the model, and gets the output of the model\r\n# Pipe the output of the model into the input of the parser.\r\nchain = model | parser\r\n\r\nchain.invoke(\"tell me a joke\")\r\n<\/code><\/pre>\n<p>\u26d4 If you run into errors running the above, try updating langchain:<\/p>\n<p><code>pip install --upgrade langchain<\/code><\/p>\n<p><strong>Let&#8217;s recap. 
By this point:<\/strong><\/p>\n<ul>\n<li>We know how to run a model<\/li>\n<li>We can run a model locally<\/li>\n<li>We know how to create a LangChain chain<\/li>\n<\/ul>\n<hr \/>\n<h3>Did You Know That\u2026<\/h3>\n<p>Before we move on to the next step, let\u2019s review some vocabulary.<\/p>\n<ul>\n<li data-start=\"128\" data-end=\"145\"><strong data-start=\"132\" data-end=\"145\">LangChain &#8211; <\/strong>LangChain is an open-source framework for building applications powered by large language models (LLMs) like GPT, Llama, and Claude. It lets you combine things like LLMs, APIs, and logic into <em data-start=\"338\" data-end=\"346\">chains<\/em>\u2014structured pipelines that take in a prompt, process it, and return a response. It\u2019s designed to make working with LLMs more modular and flexible.<\/li>\n<li data-start=\"128\" data-end=\"145\"><strong data-start=\"503\" data-end=\"522\">Prompt Template &#8211; <\/strong>A prompt template is a reusable format for structuring the input you send to a language model. It can include placeholders (like <code data-start=\"652\" data-end=\"664\">{question}<\/code>) that get filled in at runtime. This ensures consistency and allows you to customize prompts without rewriting them every time.<\/li>\n<li data-start=\"128\" data-end=\"145\"><strong data-start=\"803\" data-end=\"813\">Invoke &#8211; <\/strong><code data-start=\"814\" data-end=\"824\">invoke()<\/code> is a function used to call a language model and get a response. In LangChain, you use <code data-start=\"911\" data-end=\"939\">model.invoke(\"your input\")<\/code> to send a message to the model and receive the output. It\u2019s a simple way to run the chain and see what the model returns.<\/li>\n<li data-start=\"128\" data-end=\"145\"><strong data-start=\"1072\" data-end=\"1085\">AIMessage &#8211; <\/strong>An <code data-start=\"1089\" data-end=\"1100\">AIMessage<\/code> is a special type of output object returned by chat-based models like GPT. 
It helps keep track of the model\u2019s response in a conversation. Instead of just returning a string, it wraps the text in an object with metadata\u2014like who said it and when. You can extract the actual text using <code data-start=\"1385\" data-end=\"1395\">.content<\/code>.<\/li>\n<\/ul>\n<hr \/>\n<p>&nbsp;<\/p>\n<h2>Part Three: Building a Simple RAG System<\/h2>\n<p>Next, we\u2019ll build a simple <strong>RAG (Retrieval-Augmented Generation) system<\/strong> that retrieves information from a <strong>PDF<\/strong> and uses it to answer questions.<\/p>\n<h3>What is a RAG system?<\/h3>\n<p>A RAG (Retrieval-Augmented Generation) system is an AI framework that enhances text generation by retrieving knowledge from an external source before generating a response.<\/p>\n<p>It combines the strengths of:<\/p>\n<p>\u2705 <strong>Retrieval-based models<\/strong> \u2192 Finds relevant information from a database.<\/p>\n<p>\u2705 <strong>Generation-based models<\/strong> \u2192 Uses an LLM (Large Language Model) to generate an answer.<\/p>\n<p>This approach <strong>improves accuracy<\/strong> and <strong>reduces hallucinations<\/strong> in AI responses.<\/p>\n<h3>Installing PyPDF<\/h3>\n<p><strong>PyPDF<\/strong> is a Python library for reading, manipulating, and extracting data from PDFs. <strong>However, it is NOT a document loader.<\/strong> Instead, <strong>LangChain\u2019s <code>PyPDFLoader<\/code><\/strong> builds on top of PyPDF to integrate it into AI-powered workflows.<\/p>\n<ul>\n<li>First, find a multi-page PDF to use for this experiment. Drop it into your project folder.<\/li>\n<li><strong>Next, install pypdf: <code>pip install pypdf<\/code><\/strong><\/li>\n<\/ul>\n<h3>Setting Up a Document Loader<\/h3>\n<p>A <strong>Document Loader<\/strong> is a component that processes documents from various sources (e.g., PDFs, text files, web pages, databases) and converts them into a structured format for retrieval. 
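To make the retrieve-then-generate idea concrete before wiring up real components, here is a purely illustrative sketch in plain Python. None of these names come from the tutorial or the video: simple word overlap stands in for real embedding similarity, and the "generation" step is reduced to assembling a context-restricted prompt.

```python
# Toy sketch of a RAG flow: retrieve the best-matching "page", then
# build a prompt that restricts the model to that retrieved context.
# Word overlap is a stand-in for real embedding similarity.

def retrieve(question: str, pages: list[str]) -> str:
    """Return the page that shares the most words with the question."""
    q_words = set(question.lower().split())
    return max(pages, key=lambda page: len(q_words & set(page.lower().split())))

def build_prompt(context: str, question: str) -> str:
    """Assemble the context-restricted prompt that would go to the LLM."""
    return (
        "Answer ONLY from the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

# Toy "pages" standing in for the PDF pages loaded later in the tutorial
pages = [
    "The company was founded in 2020 and specializes in AI research.",
    "Our support hours are 9am to 5pm on weekdays.",
]

question = "When was the company founded?"
context = retrieve(question, pages)
prompt = build_prompt(context, question)
print(prompt)
```

In the real system, the retriever is backed by embeddings and a vector store, and the assembled prompt is piped into the model-and-parser chain rather than printed.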
This will allow us to load our PDF into our application.<\/p>\n<p>LangChain provides a document loader called <code>PyPDFLoader<\/code>, which is built on top of PyPDF to facilitate PDF text extraction for AI applications.<\/p>\n<p>Now, let&#8217;s <strong>load a PDF and extract text using LangChain&#8217;s <code>PyPDFLoader<\/code><\/strong>:<\/p>\n<pre><code class=\"language-python\">from langchain_community.document_loaders import PyPDFLoader\r\n\r\n# Stores the file reference (but doesn\u2019t load it yet).\r\nloader = PyPDFLoader(\"2025-why-work-with-atomic.pdf\")\r\n\r\n# load_and_split() loads the PDF and splits it into chunks or pages.\r\n# 'pages' stores the extracted text in memory as a list of Document objects.\r\npages = loader.load_and_split()\r\n\r\n# Print the extracted pages as Document objects\r\npages\r\n<\/code><\/pre>\n<p>When you run this code, you should see a list of <code>Document<\/code> objects, one per page of your PDF.<\/p>\n<p><strong>How <code>PyPDFLoader<\/code> Uses PyPDF in LangChain<\/strong><\/p>\n<p>LangChain&#8217;s <strong><code>PyPDFLoader<\/code><\/strong> uses <strong>PyPDF<\/strong> internally to read and extract text from PDFs, making it easier to integrate PDFs into <strong>AI chatbots, search engines, and RAG systems<\/strong>.<\/p>\n<p><strong>What is <code>load_and_split()<\/code> doing?<\/strong><\/p>\n<ul>\n<li>This method <strong>loads<\/strong> the PDF file and <strong>splits it into smaller text chunks<\/strong> (usually page-by-page or based on a chunking strategy).<\/li>\n<li>It prepares the data for retrieval-based AI models.<\/li>\n<\/ul>\n<p><strong>Understanding the <code>pages<\/code> variable<\/strong><\/p>\n<ul>\n<li>The <code>pages<\/code> variable stores a list of chunks, or <code>Document<\/code> objects, in memory.<\/li>\n<li>A <code>Document<\/code> is a LangChain object. 
Each <code>Document<\/code> object contains:\n<ul>\n<li><code>page_content<\/code>: The extracted text.<\/li>\n<li><code>metadata<\/code>: Information like <strong>page number, file name, and source<\/strong>.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Common Operations on <code>pages<\/code><\/h3>\n<table>\n<thead>\n<tr>\n<th>Action<\/th>\n<th>Effect<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><code>pages<\/code><\/td>\n<td>Prints a list of <code>Document<\/code> objects (one per page\/chunk)<\/td>\n<\/tr>\n<tr>\n<td><code>pages[0]<\/code><\/td>\n<td>Prints the first <code>Document<\/code> object<\/td>\n<\/tr>\n<tr>\n<td><code>pages[0].page_content<\/code><\/td>\n<td>Prints the text from the first page<\/td>\n<\/tr>\n<tr>\n<td><code>pages[0].metadata<\/code><\/td>\n<td>Prints metadata (e.g., page number)<\/td>\n<\/tr>\n<tr>\n<td><code>len(pages)<\/code><\/td>\n<td>Prints the total number of pages extracted<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>Create a Prompt Template<\/h3>\n<p>Before our app can start answering questions, we need to give it some clear instructions. Basically, we want to tell the model to only answer based on the specific information we provide\u2014nothing from its built-in knowledge, nothing from what it learned before. Just stick to the context we give it and don\u2019t make stuff up. If the answer isn\u2019t in the provided information, it should just say so instead of guessing.<\/p>\n<p>By default, AI models (like GPT-4) use <strong>both<\/strong> pre-trained knowledge <strong>and<\/strong> the input they receive. However, in our <strong>RAG system<\/strong>, we want the AI to <strong>only use the retrieved PDF data<\/strong> to answer questions accurately.<\/p>\n<pre><code class=\"language-python\">from langchain.prompts import PromptTemplate\r\n\r\n# Define a custom prompt template\r\ncustom_template = \"\"\"\r\nYou are an AI assistant instructed to answer questions about Atomic Object. 
\r\n- Answer all questions based STRICTLY on the provided context.\r\n- ONLY use this context to answer the question.\r\n- DO NOT use prior knowledge or external sources.\r\n- If the answer is not found in the context, say: \"I couldn't find the answer in the provided text.\"\r\n\r\nContext:\r\n{context}\r\n\r\nQuestion:\r\n{question}\r\n\"\"\"\r\n\r\ncontext = \"The company was founded in 2020 and specializes in AI research.\"\r\nquestion = \"When was the company founded?\"\r\n\r\n# Fill in the template\r\nprompt = PromptTemplate.from_template(custom_template)\r\nprompt.format(context=context, question=question)\r\n<\/code><\/pre>\n<h3>Pass the Prompt Template Back into Our Chain<\/h3>\n<p>Let\u2019s test it out.<\/p>\n<pre><code class=\"language-python\">chain = prompt | model | parser\r\n\r\nchain.invoke(\r\n    {\r\n        \"context\": \"My name is Alecia\",\r\n        \"question\": \"What is my name?\"\r\n    }\r\n)\r\n<\/code><\/pre>\n<p>When running this, the model should follow the instructions in the template (the <code>prompt<\/code> part of our chain) and answer the question based on the context.<\/p>\n<p>When we invoke our chain in our application, it may be helpful to understand what the input of the chain looks like. Adding <code>chain.input_schema.schema()<\/code> will show you the schema of the chain and what inputs it is expecting.<\/p>\n<p><strong>Let\u2019s Recap. By this point:<\/strong><\/p>\n<ul>\n<li>We have a chain that has a prompt, model, and parser.<\/li>\n<li>We have our PDF document pages loaded into memory.<\/li>\n<\/ul>\n<hr \/>\n<h3>Did You Know That\u2026<\/h3>\n<p>Before we move on to the next step, let\u2019s review some vocabulary.<\/p>\n<ul>\n<li data-start=\"241\" data-end=\"477\"><strong data-start=\"241\" data-end=\"281\">RAG (Retrieval-Augmented Generation) &#8211; <\/strong>An approach that combines document search (retrieval) with language generation to improve answer accuracy. 
It pulls relevant info from external sources (like PDFs) before generating a response.<\/li>\n<li data-start=\"484\" data-end=\"624\"><strong data-start=\"484\" data-end=\"509\">Retrieval-Based Model &#8211; <\/strong>Part of a RAG system that searches your documents to find the most relevant chunks based on the user\u2019s question.<\/li>\n<li data-start=\"631\" data-end=\"767\"><strong data-start=\"631\" data-end=\"657\">Generation-Based Model &#8211; <\/strong>The large language model (like GPT or LLaMA) that takes in context and generates a natural-language answer.<\/li>\n<li data-start=\"774\" data-end=\"899\"><strong data-start=\"774\" data-end=\"783\">PyPDF &#8211; <\/strong>A Python library used to read and extract text from PDF files. LangChain builds on it for AI document processing.<\/li>\n<li data-start=\"906\" data-end=\"1054\"><strong data-start=\"906\" data-end=\"921\">PyPDFLoader &#8211; <\/strong>A document loader from LangChain that uses PyPDF under the hood. It loads PDF content and turns it into chunks that an AI can use.<\/li>\n<li data-start=\"1061\" data-end=\"1239\"><strong data-start=\"1061\" data-end=\"1080\">Document Loader &#8211; <\/strong>A component in LangChain that pulls in data from files, URLs, or databases and turns it into a structured format (like <code data-start=\"1202\" data-end=\"1212\">Document<\/code> objects) for AI workflows.<\/li>\n<li data-start=\"1246\" data-end=\"1416\"><strong data-start=\"1246\" data-end=\"1266\">load_and_split() &#8211; <\/strong>A method that loads a document (like a PDF) and splits it into smaller parts\u2014typically pages\u2014so they can be searched or retrieved more efficiently.<\/li>\n<li data-start=\"1423\" data-end=\"1525\"><strong data-start=\"1423\" data-end=\"1442\">Document Object &#8211; <\/strong>A LangChain object that represents a single chunk of content. 
Each one includes <code data-start=\"1528\" data-end=\"1542\">page_content<\/code> (the actual text) and <code data-start=\"1562\" data-end=\"1572\">metadata<\/code> (info like page number or source).<\/li>\n<\/ul>\n<hr \/>\n<h2>Wrapping Up<\/h2>\n<p data-start=\"189\" data-end=\"485\">If you made it this far, you\u2019ve already built the foundation of a Retrieval-Augmented Generation (RAG) application! Starting from scratch, we:<\/p>\n<ul>\n<li data-start=\"189\" data-end=\"485\">set up a development environment<\/li>\n<li data-start=\"189\" data-end=\"485\">ran a local large language model with Ollama<\/li>\n<li data-start=\"189\" data-end=\"485\">and connected everything through LangChain to create a simple AI workflow.<\/li>\n<\/ul>\n<p data-start=\"487\" data-end=\"917\">Along the way, we explored how to switch between API-based models like GPT and locally running models such as Llama2, how LangChain chains structure interactions with language models, and how prompt templates and parsers help standardize outputs. We also walked through the basics of a RAG system by loading a PDF, breaking it into retrievable chunks, and instructing the model to answer questions using only the provided context.<\/p>\n<p data-start=\"919\" data-end=\"1219\">While this tutorial focuses on the <strong>core concepts and setup<\/strong>, it\u2019s really just the beginning. The full project can be extended in many ways\u2014adding embeddings and vector databases for smarter retrieval, building a user interface, connecting multiple data sources, or deploying the system as a real tool.<\/p>\n<p data-start=\"1221\" data-end=\"1574\">If you\u2019d like to keep building on what we started here, I highly recommend continuing with the original <a href=\"https:\/\/www.youtube.com\/watch?v=HRvyei7vFSM\"><strong>YouTube video<\/strong><\/a> by Santiago Valdarrama. 
His video walks through the remaining steps of the project and provides a great visual guide for expanding the RAG system further. You can follow along with the video using the setup and explanations we covered here.<\/p>\n<p data-start=\"1576\" data-end=\"1908\" data-is-last-node=\"\" data-is-only-node=\"\">The important takeaway is that building AI applications doesn\u2019t require deep machine learning expertise to get started. With tools like Ollama, LangChain, and open-source models, it\u2019s possible to experiment, learn, and build useful AI-powered tools step by step\u2014and I hope this guide helped make that first step a little clearer.<\/p>\n<p><!-- notionvc: 2a966963-24a6-4324-b408-a0cfeba261c0 --><\/p>\n<p><!-- notionvc: a08eb308-6f93-4a55-a50d-38706d120e5a --><\/p>\n<p><!-- notionvc: e898d155-466d-4146-a8d6-2538a5f96f60 --><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This tutorial is for developers, designers who code, or anyone new to AI who wants a hands-on introduction to building a custom AI chatbot that can search and answer questions using your own data. 
<p><em>Written by Alecia Frederick. Estimated reading time: 13 minutes.</em></p>