Context Engineering with Redis and LangChain

In this 1-hour workshop, you’ll learn how to design scalable AI agents by mastering context engineering — the discipline of structuring memory, retrieval, and reasoning workflows so LLMs behave reliably in production. The session explores how Redis and LangChain enable high-performance agent systems through semantic caching, vector search, and intelligent context management.

Robert Shelton

Robert is a builder with a background in data science and full stack engineering. As an Applied AI Engineer at Redis, he focuses on bridging the gap between AI research and real-world applications. In open source, he helps maintain the Redis Vector Library and contributes to integrations with LangChain, LlamaIndex, and LangGraph. He has delivered workshops and consulting engagements for multiple Fortune 50 companies and has spoken at conferences including PyData and CodeMash.

Workshop Overview

Modern AI systems are shifting from prompt engineering to context engineering, where developers program how information flows into an LLM rather than relying on a single prompt. Instead of treating the model as a chatbot, this approach builds structured pipelines that assemble instructions, retrieved knowledge, memory, and tool outputs into the context window — turning stateless models into reliable, production-ready agents.
This workshop explores how Redis and LangChain enable scalable context architectures by combining vector search, semantic caching, and agent memory into a unified system. Attendees will learn how to design efficient context pipelines that reduce token costs, improve latency, and maintain persistent state across sessions — key capabilities for building real-world AI applications beyond demos. 

1. Foundations of Context Engineering

  • Moving beyond prompt engineering into programmable context pipelines

  • Understanding the context window as working memory for agents

  • Strategies for selecting, compressing, and isolating context signals
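
The selection and compression ideas above can be sketched in a few lines of plain Python. This is a minimal illustration, not workshop code: the token count is a naive whitespace split (a real pipeline would use the model's tokenizer), and the function names are invented for the example.

```python
# Minimal sketch of a context-assembly step: select signals and trim
# them to a fixed context-window budget. Instructions always make it in;
# retrieved knowledge and memory are included only while they fit.

def count_tokens(text: str) -> int:
    # Naive stand-in for a real tokenizer.
    return len(text.split())

def assemble_context(instructions: str, retrieved: list[str],
                     memory: list[str], budget: int) -> str:
    parts = [instructions]
    used = count_tokens(instructions)
    # Prefer retrieved knowledge, then memory; skip chunks that overflow.
    for chunk in retrieved + memory:
        cost = count_tokens(chunk)
        if used + cost > budget:
            continue
        parts.append(chunk)
        used += cost
    return "\n\n".join(parts)

context = assemble_context(
    instructions="You are a support agent.",
    retrieved=["Refunds are processed within 5 business days."],
    memory=["User previously asked about order #123."],
    budget=20,
)
```

Real systems replace the skip-on-overflow rule with smarter strategies (summarization, relevance ranking), which the later sections of the workshop cover.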


2. Memory Architectures for AI Agents

  • Short-term vs long-term memory design

  • Retrieval-augmented generation (RAG) patterns

  • Using Redis vector search and agent memory systems for persistent context 
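
To make the RAG retrieval step concrete, here is a toy sketch of similarity-based retrieval. An in-memory list stands in for a Redis vector index, and `embed()` is a fake character-frequency embedding invented for the demo, not a real model.

```python
# Toy RAG retrieval: rank stored chunks by cosine similarity to a query
# vector. In production this lookup would be a Redis vector search query.
import math

def embed(text: str) -> list[float]:
    # Hypothetical stand-in embedding: letter frequencies, demo only.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

docs = ["Redis supports vector search.", "LangChain orchestrates agents."]
index = [(d, embed(d)) for d in docs]   # stand-in for a vector index

def retrieve(query: str, k: int = 1) -> list[str]:
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]
```

The same shape carries over to a real deployment: swap the fake embedding for a model, and the in-memory list for a persistent Redis index.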


3. High-Performance Context Pipelines

  • Semantic caching to reduce token usage and latency

  • Context pruning and summarization strategies

  • Designing for throughput, reliability, and real-time agent interactions


4. Orchestrating Context with LangChain

  • Building multi-step agent workflows

  • Managing tool outputs, conversation history, and state

  • Structuring context flows for scalable production deployments
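
The state-management pattern in this section can be sketched in plain Python: each tool result is appended to the conversation state before the next step runs, so later steps see earlier outputs. The tool, plan, and names below are illustrative only, not a real LangChain API.

```python
# Sketch of a multi-step agent loop: tool outputs join the shared
# conversation history, which becomes context for subsequent steps.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    history: list[dict] = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.history.append({"role": role, "content": content})

def search_tool(query: str) -> str:
    # Stub tool standing in for a real retrieval or API call.
    return f"results for '{query}'"

def run(state: AgentState, steps: list[str]) -> AgentState:
    for step in steps:
        state.add("assistant", f"calling search_tool({step!r})")
        state.add("tool", search_tool(step))   # output joins the context
    return state

state = run(
    AgentState(history=[{"role": "user",
                         "content": "Compare Redis and LangChain"}]),
    steps=["Redis vector search", "LangChain agents"],
)
```

Frameworks like LangChain manage exactly this bookkeeping (plus persistence and branching) so you can focus on the workflow itself.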

Time and Location

March 31, 2026
10:30am - 11:30am
Cobb Galleria

Who Should Attend

  • AI practitioners and researchers

  • Developers seeking to transition into advanced agent-building roles

  • Organizations looking to implement custom AI solutions
