Chapter 2

LangChain is an open-source framework that facilitates the development of applications using large language models (LLMs) by providing a standard interface and integrations with other tools. It supports the creation of agentic AI, which operates autonomously to make decisions and perform complex tasks, and includes core concepts like chains, agents, tools, and memory. Additionally, LangGraph enhances LangChain by enabling the design of complex workflows and stateful interactions among agents.

Introduction to LangChain
LangChain is an open-source framework designed to simplify the creation of
applications using large language models (LLMs). It provides a standard interface for
chains, many integrations with other tools, and end-to-end chains for common
applications.
LangChain allows AI developers to build applications that combine large language models (such as GPT-4) with external sources of computation and data.
This framework comes with packages for both Python and JavaScript.
• What is agentic AI?
• Imagine an AI that doesn’t just respond to prompts but actively makes decisions, gathers
information, uses tools, stores memory, and works toward a goal without needing you to
hold its hand at every step. That’s what agentic AI is all about.
• Agentic AI refers to a new class of intelligent systems where large language models (LLMs)
act autonomously using a combination of memory, tools, logic, and interaction. Rather than
generating isolated responses, these agents behave more like problem solvers, capable of
carrying out complex, multistep tasks. Some key characteristics of agentic AI are as follows (a minimal agent-loop sketch in Python follows this list):
• Autonomy: The agent takes initiative, plans, and makes decisions.
• Tool Use: It can call APIs, perform searches, trigger functions, or interact with files.
• Memory: The agent retains short-term or long-term context to maintain continuity across
steps.
• Reasoning: It can break down a task, evaluate responses, and loop through logic chains.
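To make these characteristics concrete, here is a minimal agent loop in plain Python. It is a toy sketch, not a real framework: the "tool" is an ordinary function standing in for a search API, and the "planner" simply picks the next unanswered sub-question.

```python
# Toy agent loop: reasoning picks the next sub-task, a tool gathers an
# observation, and memory records it until the goal is met.

def search_tool(query: str) -> str:
    # Hypothetical tool: a canned lookup standing in for a real search API.
    facts = {"capital of France": "Paris", "population of Paris": "about 2.1 million"}
    return facts.get(query, "no result")

def run_agent(sub_questions, tools, max_steps=10):
    memory = []                                   # retained context across steps
    for _ in range(max_steps):
        answered = {m[0] for m in memory}
        pending = [q for q in sub_questions if q not in answered]
        if not pending:                           # autonomy: stop when the goal is met
            break
        question = pending[0]                     # reasoning: pick the next sub-task
        observation = tools["search"](question)   # tool use: call an external function
        memory.append((question, observation))    # memory: record the result
    return memory

print(run_agent(["capital of France", "population of Paris"], {"search": search_tool}))
```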
• Some real-world examples of agentic AI are as follows:
• Research assistants that explore, summarize, and cite sources.
• Coding copilots that can plan, write, and debug code.
• Auto-email agents that read emails, draft responses, and take follow-up
actions.
• These systems represent a shift from static prompt-response LLMs to
interactive, goal-oriented agents. To build such agents, you need
powerful frameworks like LangChain and LangGraph.
• Why use LangChain for agentic AI?
• Building an agent isn’t just about generating responses—it’s about
giving your AI a structure to think, decide, and act. That’s where
LangChain comes in.
• LangChain is a robust framework that helps developers define the
logic, tools, memory, and workflows an agent needs to function more
intelligently and in a goal-driven manner. Instead of prompting an
LLM in isolation, LangChain lets you design how an agent should
behave step by step, like how it chooses tools, retains memory, or
interacts with a user.
LangChain consists of the following core concepts (a minimal chain example follows the list):
• Chains: Sequences of steps that process input → transform it →
produce output.
• Agents: Dynamic systems that decide what to do next based on the
current context.
• Tools: External functions or APIs the agent can call—like a calculator,
search API, or code interpreter.
• Memory: Mechanisms that store and recall past interactions or facts
across conversations.
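As a concrete illustration, here is a minimal chain using LangChain's Python package, assuming `langchain` and `langchain-openai` are installed and an OPENAI_API_KEY environment variable is set; the model name is an illustrative choice.

```python
# Minimal LangChain chain: prompt -> LLM -> output parser (LCEL pipe syntax).
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Summarize the following topic in two sentences: {topic}"
)
llm = ChatOpenAI(model="gpt-4o-mini")     # model name is an assumption; any chat model works
chain = prompt | llm | StrOutputParser()  # chain: process input -> transform -> produce output

print(chain.invoke({"topic": "agentic AI"}))
```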
LangChain supports different agent types, from reactive agents responding
to one task at a time, to conversational agents tracking history and
planning multistep actions. Here are some real-world use cases of
LangChain:
• Coding copilots that use online docs and run code
• Travel planners that interact with search and map APIs
• AI tutors that remember student progress and adapt lessons
• Data analyzers that ingest documents, summarize, and answer questions
LangChain provides modular building blocks for designing capable agents,
but agents still need a brain that can coordinate decisions, loop over logic,
and manage branching workflows. And that’s where LangGraph takes over.
• What is LangGraph?
• Think of LangGraph as the graph engine that powers intelligent AI
workflows. While LangChain provides the building blocks for agents,
LangGraph helps you connect those blocks into complex, stateful
workflows with branching, looping, and multi-agent coordination.
Built on top of LangChain, LangGraph lets you define:
• Nodes: Individual steps or actions in your workflow, like asking a
question or calling a tool
• Edges: Paths that connect nodes and determine the flow
• States: Data and context stored across steps to maintain memory and
decision history
• Conditional transitions: Logic to decide which path to take next based
on agent outputs or external conditions
This makes LangGraph ideal for orchestrating agents that need to
perform iterative tasks, coordinate multiple agents, or manage
workflows that aren’t just linear sequences.
It also supports persistent memory and state storage, so your AI agents
can remember essential details throughout long-running processes.
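A small illustrative LangGraph workflow, assuming the `langgraph` package is installed: one node updates a counter carried in the state, and a conditional transition loops back to the node until the counter reaches a limit.

```python
# Nodes, edges, state, and a conditional transition in LangGraph.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    count: int              # state: data carried across steps

def work(state: State) -> dict:
    # Node: one step of the workflow; returns a partial state update
    return {"count": state["count"] + 1}

def route(state: State) -> str:
    # Conditional transition: keep looping until count reaches 3
    return "done" if state["count"] >= 3 else "again"

graph = StateGraph(State)
graph.add_node("work", work)
graph.set_entry_point("work")
graph.add_conditional_edges("work", route, {"again": "work", "done": END})

app = graph.compile()
print(app.invoke({"count": 0}))   # -> {'count': 3}
```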
What is AI Agent Architecture?
AI agent architecture is the structural design of an autonomous agent. It determines how that
agent processes information, makes decisions, and interacts with its environment.
Agent architecture integrates sensors, processing mechanisms, and actuators to create a
structured system capable of operating in dynamic and unpredictable environments.
This is why it’s essential in applications like autonomous vehicles, surveillance systems, and AI-
driven automation.
Different architectures suit different levels of autonomy and complexity. The four main types
include:
• Reactive architectures
• Deliberative architectures
• Hybrid architectures
• Layered architectures
Agent Architectures
• AI agent architecture serves as the conceptual blueprint for designing
intelligent agent systems in Artificial Intelligence. It outlines the core
components and their interactions, enabling agents to perceive,
process information, make decisions, and interact with their
environments to achieve predefined goals.
Components of AI Agent Architectures
• Profiling / Perception Module: The Eyes and Ears of the Agent
This module acts as the agent's sensory system, gathering and interpreting data from the environment. It processes raw sensory input (like images, sound, text, or sensor readings) to identify relevant features for decision-making.
• Imagine a self-driving car equipped with radar, lidar, and camera sensors. By processing data from these sensors, the perception module recognizes obstacles, traffic lights, and lane markings, allowing the vehicle to navigate safely.
Components of AI Agent Architectures
• Memory Module: The Repository of Knowledge
This component serves as the agent's knowledge base, storing both short-term and long-term information. It allows the agent to recall past experiences, rules, and patterns, facilitating learning and informed decision-making.
• For example, a virtual assistant chatbot uses its memory module to remember customer requests, product specifications, and frequently asked questions so it can give timely, accurate replies during conversations.
Components of AI Agent Architectures
• Planning Module: The Strategist Behind Decisions
This module is responsible for the agent's decision-making and goal-oriented behavior. It evaluates the current situation, considers alternatives, and selects the most effective course of action to achieve the agent's goals, often employing strategies like search algorithms or rule-based systems.
• For instance, a delivery routing agent uses the planning module to compute the most effective routes and timetables for its fleet, taking into account variables like vehicle capacity, traffic patterns, and delivery priority.
Components of AI Agent Architectures
• Action Module: From Plans to Execution
This module translates the decisions and plans made by the planning module into real-world actions. It interfaces with actuators or output devices to execute the selected actions and receives feedback to adjust future behavior.
• When a robotic arm is used in a factory, the action module receives instructions on how to pick up and move items, then sends signals to the robot's motors to carry out the precise motions required for accurate assembly tasks.
Components of AI Agent Architectures
• Learning Strategies: The Engine of Adaptation
This crucial element enables the agent to learn from experience and adapt its behavior over time. Learning can occur through various methods like supervised learning, unsupervised learning, or reinforcement learning, allowing the agent to refine its decision-making and improve performance.
Agent Autonomy
• Agent autonomy refers to the degree to which an AI agent can operate
independently and make decisions without direct human intervention. It's a key
characteristic of autonomous agents, which are designed to perceive their
environment, reason about goals, and take actions to achieve those goals without
constant supervision.
Reinforcement Learning
• Reinforcement learning (RL) is a crucial component of agentic systems, enabling them to learn and improve their behavior through interaction with an environment. In RL, an agent takes actions, receives feedback (rewards or penalties), and adjusts its strategy to maximize long-term rewards. This learning process allows agentic systems to become more autonomous and adapt to dynamic environments.
Reinforcement Learning
• Key components of RL include:
• Agent: The decision-maker (e.g., a robot, a trading bot).
• Environment: The world the agent interacts with (e.g., a factory floor, a stock market).
• Actions: The set of choices available to the agent.
• Reward Signal: Feedback that guides the agent's learning.
• Policy: A strategy that the agent uses to decide actions based on its current state.
https://www.computer.org/publications/tech-news/community-voices/reinforcement-learning-in-agentic-systems
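The sketch below maps each of these components onto a tabular Q-learning agent in a toy one-dimensional corridor; the environment, rewards, and hyperparameters are illustrative choices, not taken from the cited article.

```python
# Tabular Q-learning on a 5-cell corridor: the agent learns to walk right.
import random

N_STATES, GOAL = 5, 4                       # environment: positions 0..4, reward at 4
actions = [-1, +1]                          # actions: move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2       # learning rate, discount, exploration

def policy(s):
    # policy: epsilon-greedy choice of action given the current state
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

for episode in range(500):
    s = 0                                   # agent starts at the left end
    while s != GOAL:
        a = policy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0  # reward signal guides learning
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# Learned greedy policy: +1 (move right) from every non-goal state.
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```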
Reinforcement Learning
• Agentic systems are designed to exhibit autonomy, adaptability, and reasoning capabilities through interaction with their environment. Reinforcement learning (RL) has emerged as a crucial paradigm for enhancing these systems' capabilities in several key dimensions:
Reinforcement Learning
• Adapting to Dynamic Environments: Agents learn optimal behaviors
even in non-stationary and uncertain conditions through continuous
interaction and feedback.
• Recent advances in deep RL have enabled agents to handle complex,
high-dimensional state spaces and adapt to changing circumstances
(Mnih et al., 2015).
• For example, in robotic applications, RL agents have demonstrated the
ability to learn and adjust manipulation strategies in real-time based on
environmental feedback (Levine et al., 2016).

Reinforcement Learning
• Scalability: Multi-agent RL (MARL) enables collaboration and competition
among agents in large-scale environments.
• Research has shown that MARL can effectively handle scenarios with
hundreds of agents, making it suitable for applications like traffic control
systems and supply chain optimization (Zhang et al., 2021).
• The emergence of techniques like centralized training with decentralized
execution (CTDE) has further improved the scalability of multi-agent
systems (Lowe et al., 2017).

Reinforcement Learning
• Long-Term Planning: RL allows agents to plan sequences of actions,
optimizing for long-term gains over short-term rewards. Hierarchical RL
approaches have proven particularly effective for temporal abstraction
and planning (Bacon et al., 2017).
• These methods enable agents to learn complex behaviors by
decomposing tasks into subtasks and developing temporally extended
action policies.

Industrial Applications of RL in Agentic Systems
• 1. Supply Chain Optimization
• Amazon’s Robotics and Warehousing: Amazon utilizes reinforcement learning (RL)-powered
agents in its fulfillment centers to optimize robot movements for inventory management. These
agents adapt to real-time demand fluctuations, minimize travel distances for picking and
packing, and enhance warehouse efficiency by over 20% while reducing operational costs.

• Amazon’s Robotics and Warehousing division deploys more than 350,000 mobile robots, making
it one of the largest-scale applications of RL in logistics. This system includes Kiva robots using RL
algorithms for dynamic path planning, which reduces order processing time by 50%, and
adaptive scheduling systems that adjust to demand spikes during events like Prime Day (Amazon,
2022).
Industrial Applications of RL in Agentic Systems
• 2. Autonomous Vehicles
• Waymo's Self-Driving Cars: Waymo's autonomous vehicles have driven over 20 million miles in real-world conditions, leveraging advanced reinforcement learning (RL) systems for safety-critical decision-making in complex scenarios. These vehicles optimize trajectories in real time using the SimulationCity platform and employ multi-agent learning to predict traffic interactions.
• Research shows that Waymo's system achieves a disengagement rate of only 0.076 per 1,000 miles, significantly outperforming human drivers (Waymo Safety Report, 2021).
Industrial Applications of RL in Agentic Systems
• 3. Energy Management
• Google DeepMind's Data Center Cooling: Google DeepMind has implemented reinforcement learning (RL) to optimize data center cooling, achieving a 40% reduction in cooling energy costs and a consistent 15% improvement in Power Usage Effectiveness (PUE).
• This system results in annual energy savings of hundreds of millions of dollars. It processes over 120,000 potential actions every five minutes to ensure optimal cooling efficiency (Gao, 2020).
Meta-cognition and self-adaptation in Agents:
• Metacognition in agentic AI refers to an AI system's ability to understand
and reason about its own cognitive processes, while self-adaptation is the
system's capacity to adjust its behavior based on this self-awareness to
improve performance.
• This combination allows AI agents to learn from experience, solve novel
problems, and operate more effectively in complex environments.

https://youtu.be/cyfS6APUbHU
Meta-cognition and self-adaptation in Agents:
• Meta-cognitive (“thinking about thinking”)
architecture is a two-layer system where:
• The Object-Level does the work, and the
Meta-Level makes sure it’s done well.
• Together, they create agents that don’t
just act — they learn how to learn and
improve how they act.
Meta-cognition and self-adaptation in Agents:
• Meta-Level (Self-Reflection)
• Function: Monitors, evaluates, and adjusts the Object-Level's behavior.
• Includes:
• Goal assessment
• Strategy evaluation
• Error detection
• Adaptation and learning

Example: The agent notices it's making repeated errors and decides to
change its approach.
Meta-cognition and self-adaptation in Agents:
• Object-Level (Task Execution)
• Function: Directly performs tasks and interacts with the environment.
This is the agent’s operational level where problem-solving, reasoning,
and actions happen.

• Example: Navigating, responding to queries, making decisions, etc.


Meta-cognition and self-adaptation in Agents:
• Environment
• Function: The real or virtual world the agent interacts with.
• Provides input and receives actions from the Object-Level.

• Example: Sensors feeding data to a robot; user queries in a chatbot environment.
Meta-cognition and self-adaptation in Agents:
• Bidirectional Flow
• Meta-Level ↔ Object-Level: The Meta-Level monitors and modifies the Object-Level, while also receiving feedback from it to improve future strategies.
• Object-Level ↔ Environment: The Object-Level perceives the environment and acts within it.
This architecture enables agents to:
• Self-monitor and adapt in complex or changing conditions
• Handle uncertainty and failure better
• Exhibit more robust, intelligent, and human-like behavior
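A toy Python sketch of this two-layer loop, under the assumption that strategies can be modeled as plain functions: the Object-Level attempts the task, and the Meta-Level detects repeated errors and switches strategy.

```python
# Two-layer meta-cognitive loop with illustrative stand-in strategies.

def strategy_a(x):
    # Object level: a deliberately flawed strategy, to trigger adaptation
    raise ValueError("strategy_a fails on this input")

def strategy_b(x):
    # Object level: a working alternative
    return x * 2

def run_with_metacognition(task_input, strategies, max_errors=2):
    current, errors = 0, 0                            # meta-level state
    while True:
        try:
            return strategies[current](task_input)    # object level acts
        except Exception:
            errors += 1                               # meta level: error detection
            if errors >= max_errors:
                current += 1                          # meta level: adapt the strategy
                errors = 0
                if current >= len(strategies):
                    raise RuntimeError("all strategies exhausted")

print(run_with_metacognition(21, [strategy_a, strategy_b]))   # -> 42
```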
Meta-cognition and self-adaptation in Agents:
• Self Adaptation in Agents
• Self-adaptation refers to an agent’s ability to adjust its behaviour and
strategies in response to changes in the environment or its own internal
state.
Key Mechanisms of Self-Adaptation:
• Environmental Feedback: Adjusting behaviour based on observed changes or
rewards.
• Policy Adjustment: Changing decision-making rules or heuristics.
• Learning from Experience: Using reinforcement learning, meta-learning, or online
learning.
• Resource Awareness: Adapting based on computational limits or energy
constraints.
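A minimal sketch of one such mechanism (policy adjustment driven by environmental feedback): the agent lowers its exploration rate when recent rewards are consistently high. The window size and thresholds are arbitrary assumptions for the sketch.

```python
# Self-adaptation: adjust an internal parameter from a window of feedback.
from collections import deque

class AdaptiveAgent:
    def __init__(self):
        self.epsilon = 0.5                 # internal state: exploration rate
        self.recent = deque(maxlen=10)     # window of environmental feedback

    def observe_reward(self, r: float) -> None:
        self.recent.append(r)
        if len(self.recent) == self.recent.maxlen:
            avg = sum(self.recent) / len(self.recent)
            # Policy adjustment: exploit more once performance is reliably high
            self.epsilon = 0.1 if avg > 0.8 else 0.5

agent = AdaptiveAgent()
for r in [1.0] * 10:
    agent.observe_reward(r)
print(agent.epsilon)    # -> 0.1 after consistently high rewards
```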
Meta-cognition and self-adaptation in Agents:
• Self Adaptation in Agents
• Example:
• An autonomous drone encountering unexpected wind conditions adjusts
its flight path and control algorithms to maintain stability and safety.
Meta-cognition and self-adaptation in Agents:
Relation Between Meta-Cognition and Self-Adaptation
Meta-cognition enables self-adaptation by providing the mechanisms for:
• Assessing when adaptation is needed
• Selecting how to adapt (strategy-level decisions)
• Evaluating the success of the adaptation
Goal modelling and reasoning
Goal modeling defines what the agent wants to do.
Goal reasoning figures out how, when, and whether it should do it.
This capability is central to autonomous, intelligent, and purposeful behavior in agentic AI systems.
Goal modelling and reasoning
Together, goal modeling and reasoning allow agents to:
• Act deliberately (not reactively)
• Adapt to changes or failure
• Pursue complex tasks in multi-step, multi-agent environments
A small goal-selection sketch follows.
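This tiny sketch illustrates the distinction with made-up goal names: the goal model lists what the agent might pursue (with priorities and preconditions), and goal reasoning decides which applicable goal to pursue now.

```python
# Goal modeling (what) vs. goal reasoning (how/when/whether), in miniature.

goals = [
    {"name": "recharge", "priority": 3, "precondition": lambda s: s["battery"] < 20},
    {"name": "deliver",  "priority": 2, "precondition": lambda s: s["has_package"]},
    {"name": "patrol",   "priority": 1, "precondition": lambda s: True},
]

def select_goal(state):
    # Goal reasoning: among currently applicable goals, pursue the most urgent
    applicable = [g for g in goals if g["precondition"](state)]
    return max(applicable, key=lambda g: g["priority"])["name"] if applicable else None

print(select_goal({"battery": 15, "has_package": True}))   # -> "recharge"
print(select_goal({"battery": 80, "has_package": True}))   # -> "deliver"
```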


Planning under Uncertainty

• Planning in AI refers to the process where an agent selects a sequence of actions to transition from an initial state to a goal state.
• In real-world scenarios, environments are rarely perfect — they may be
uncertain, incomplete, or changing over time.
• To act effectively under such conditions, agents need to plan in a way that
accounts for risks, unknowns, and change.
Planning under Uncertainty
Challenges in Planning Under Uncertainty
• Incomplete Knowledge: The agent doesn't have full visibility into the environment or its current state. Causes: sensor limitations, hidden variables, lack of prior data. Planning must incorporate assumptions or seek information to reduce uncertainty. Example: a delivery robot doesn't know if a hallway is blocked until it gets closer.
• Stochastic Environments: Actions have probabilistic outcomes; the same action may not always lead to the same result. This requires modelling the likelihood of different outcomes and choosing actions that maximize expected utility. Example: a drone may attempt to land, but wind conditions could cause it to miss the target slightly.
• Dynamic Changes: The environment changes over time, possibly due to other agents or external factors. Plans may need to be re-evaluated and updated in real time. Example: a warehouse robot's path must adapt if a human worker unexpectedly blocks its route.
Planning under Uncertainty
Approaches to Planning Under Uncertainty
• Probabilistic Planning (e.g., POMDPs)
• Contingency Planning
• Heuristic-Based Planning
Approaches to Planning under Uncertainty

Probabilistic Planning (e.g., POMDPs)
• POMDP (Partially Observable Markov Decision Process) is a formal framework that models:
• Uncertain action outcomes
• Partial observability
• Sequential decision-making
• The agent maintains a belief state (a probability distribution over possible actual
states) and selects actions that maximize expected rewards.
• Use Case: Healthcare decision support systems where the patient’s true health
state isn't fully known, but actions like testing or treatment must be planned.
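A minimal belief-state update for a two-state POMDP, following the standard Bayes filter b'(s') ∝ O(o | s') Σ_s T(s' | s, a) b(s); the transition and observation probabilities below are made-up numbers for illustration.

```python
# Belief update for a toy two-state healthcare POMDP.

states = ["healthy", "sick"]
T = {  # T[s][s']: transition probabilities under the chosen action ("treat")
    "healthy": {"healthy": 0.9, "sick": 0.1},
    "sick":    {"healthy": 0.6, "sick": 0.4},
}
O = {  # O[s'][o]: probability of each observation in the new state
    "healthy": {"test_neg": 0.8, "test_pos": 0.2},
    "sick":    {"test_neg": 0.3, "test_pos": 0.7},
}

def update_belief(belief, observation):
    new = {}
    for s2 in states:
        # Predict: probability of landing in s2 given the prior belief
        predicted = sum(T[s][s2] * belief[s] for s in states)
        # Correct: weight by how likely the observation is in s2
        new[s2] = O[s2][observation] * predicted
    z = sum(new.values())                    # normalize to a distribution
    return {s: p / z for s, p in new.items()}

belief = {"healthy": 0.5, "sick": 0.5}       # initial belief state
print(update_belief(belief, "test_pos"))     # belief shifts toward "sick"
```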
Approaches to Planning under Uncertainty

Contingency Planning
• Also known as "if-then" planning.
• The agent prepares branches in its plan to handle different possible future events.
• Especially useful in dynamic or partially known environments where the agent
cannot commit to a single fixed path.
• Example:
• "If the door is locked, then try the window. If the window is closed, then find
another entrance."
• Use Case: Rescue robots in disaster zones where obstacles or hazards are not fully
known ahead of time.
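The door/window example above can be written as a tiny nested plan structure; the world model below is a hypothetical stand-in for real sensing.

```python
# An "if-then" contingency plan as a nested decision structure.

plan = {
    "action": "try_door",
    "if_fail": {
        "action": "try_window",
        "if_fail": {"action": "find_another_entrance", "if_fail": None},
    },
}

def execute(plan, world):
    # world maps an action to whether it succeeds in this particular situation
    while plan is not None:
        if world.get(plan["action"], False):
            return plan["action"]          # this branch succeeded
        plan = plan["if_fail"]             # fall through to the contingency
    return None                            # no branch worked

# Door locked, window closed: the plan falls through to the last branch.
print(execute(plan, {"try_door": False, "try_window": False,
                     "find_another_entrance": True}))
```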
Approaches to Planning under Uncertainty

Heuristic-Based Planning
• Uses domain-specific heuristics (rules of thumb) to guide planning efficiently in large or complex spaces.
• Focuses on reducing search complexity and prioritizing promising actions.
• Often used in real-time or constrained scenarios where full computation isn't
feasible.
• Example: A game-playing AI estimating that "moving toward the center" gives
better control of the board.
• Use Case: Navigation systems, game AI, and robotics where quick decisions are
needed.
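A short greedy best-first search sketch on a small grid, using Manhattan distance to the goal as the heuristic; the grid size and obstacles are illustrative choices.

```python
# Greedy best-first search: always expand the node the heuristic likes most.
import heapq

def manhattan(p, goal):
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

def best_first(start, goal, blocked, size=5):
    frontier = [(manhattan(start, goal), start)]
    came_from = {start: None}
    while frontier:
        _, cur = heapq.heappop(frontier)       # expand the most promising node
        if cur == goal:
            path = []
            while cur is not None:             # reconstruct the path
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (cur[0] + dx, cur[1] + dy)
            if nxt in came_from or nxt in blocked:
                continue
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            came_from[nxt] = cur
            heapq.heappush(frontier, (manhattan(nxt, goal), nxt))
    return None                                # no path found

print(best_first((0, 0), (4, 4), blocked={(2, 2), (2, 3)}))
```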
Comparison of Approaches to Planning under Uncertainty

• POMDPs: Strengths: rigorous handling of uncertainty and partial information. Limitations: computationally expensive, hard to scale.
• Contingency Planning: Strengths: flexible to environmental changes. Limitations: can become complex with many branches.
• Heuristic Planning: Strengths: fast, scalable, and practical in large environments. Limitations: may not handle deep uncertainty or guarantee optimality.