Build Smarter AI Apps: Empower LLMs with LangChain


Module Cheat Sheet: Introduction to LangChain in GenAI


WatsonxLLM
A class from the ibm_watson_machine_learning.foundation_models.extensions.langchain module that creates a LangChain-compatible wrapper around IBM's watsonx.ai models.

from ibm_watsonx_ai.foundation_models import ModelInference
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams
from ibm_watson_machine_learning.foundation_models.extensions.langchain import WatsonxLLM

model_id = 'mistralai/mixtral-8x7b-instruct-v01'
parameters = {
    GenParams.MAX_NEW_TOKENS: 256,
    GenParams.TEMPERATURE: 0.2,
}
credentials = {"url": "https://us-south.ml.cloud.ibm.com"}
project_id = "skills-network"

model = ModelInference(
    model_id=model_id,
    params=parameters,
    credentials=credentials,
    project_id=project_id
)

mixtral_llm = WatsonxLLM(model=model)
response = mixtral_llm.invoke("Who is man's best friend?")
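Newer projects typically use the langchain_ibm package (the same package this cheat sheet uses for WatsonxEmbeddings below) rather than the older ibm_watson_machine_learning wrapper. A minimal sketch, assuming the same model and Skills Network credentials:

# Sketch: langchain_ibm's WatsonxLLM takes the model settings directly,
# so no separate ModelInference object is needed.
from langchain_ibm import WatsonxLLM

mixtral_llm = WatsonxLLM(
    model_id="mistralai/mixtral-8x7b-instruct-v01",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="skills-network",
    params={GenParams.MAX_NEW_TOKENS: 256, GenParams.TEMPERATURE: 0.2},
)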

Message Types
Different types of messages that chat models can use to provide context and control the conversation. The most common message types are SystemMessage, HumanMessage, and AIMessage.

from langchain_core.messages import HumanMessage, SystemMessage, AIMessage

msg = mixtral_llm.invoke([
    SystemMessage(content="You are a helpful AI bot that assists a user in choosing the perfect book."),
    HumanMessage(content="I enjoy mystery novels, what should I read?")
])
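The example above uses only SystemMessage and HumanMessage; AIMessage records the model's side of the conversation, so earlier turns can be replayed as context. A minimal sketch (the message contents are illustrative):

# AIMessage carries an earlier model reply; including it lets the model
# see the conversation so far before answering the new question.
msg = mixtral_llm.invoke([
    HumanMessage(content="I enjoy mystery novels, what should I read?"),
    AIMessage(content="You might enjoy 'The Hound of the Baskervilles'."),
    HumanMessage(content="I've read that one. What else?")
])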

PromptTemplate
A class from the langchain_core.prompts module that helps format prompts with variables. These templates allow you to define a consistent format while leaving placeholders for variables that change with each use case.

from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me one {adjective} joke about {topic}")
input_ = {"adjective": "funny", "topic": "cats"}
formatted_prompt = prompt.invoke(input_)
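Prompt templates are rarely formatted in isolation; the usual pattern is to pipe them straight into an LLM. A minimal sketch using the mixtral_llm defined above:

# A template piped into an LLM forms a runnable chain; invoking the chain
# fills the placeholders and sends the formatted prompt to the model.
joke_chain = prompt | mixtral_llm
response = joke_chain.invoke({"adjective": "funny", "topic": "cats"})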


ChatPromptTemplate
A class from the langchain_core.prompts module that formats a list of chat messages with variables. These templates consist of a list of message templates themselves.

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("user", "Tell me a joke about {topic}")
])

input_ = {"topic": "cats"}
formatted_messages = prompt.invoke(input_)

MessagesPlaceholder
A placeholder that allows you to add a list of messages at a specific spot in a ChatPromptTemplate. This capability is useful when you want the user to pass in a list of messages that is slotted into a particular spot.

from langchain_core.prompts import MessagesPlaceholder
from langchain_core.messages import HumanMessage

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    MessagesPlaceholder("msgs")
])

input_ = {"msgs": [HumanMessage(content="What is the day after Tuesday?")]}
formatted_messages = prompt.invoke(input_)

JsonOutputParser
A parser that allows users to specify an arbitrary JSON schema and query LLMs for outputs that conform to that schema. A parser is useful for obtaining structured data from LLMs.

from langchain_core.output_parsers import JsonOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

output_parser = JsonOutputParser(pydantic_object=Joke)

format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": format_instructions},
)

chain = prompt | mixtral_llm | output_parser
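The chain is defined but not run above; a minimal invocation sketch, assuming the parser turns the model's reply into a dict matching the Joke schema:

# Invoking the chain returns a Python dict parsed from the model's JSON,
# e.g. {"setup": "...", "punchline": "..."}.
result = chain.invoke({"query": "Tell me a joke about cats."})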


CommaSeparatedListOutputParser
A parser used to return a list of comma-separated items. This parser converts the LLM's response into a Python list.

from langchain.output_parsers import CommaSeparatedListOutputParser

output_parser = CommaSeparatedListOutputParser()

format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
    template="Answer the user query. {format_instructions}\nList five {subject}.",
    input_variables=["subject"],
    partial_variables={"format_instructions": format_instructions},
)

chain = prompt | mixtral_llm | output_parser
result = chain.invoke({"subject": "ice cream flavors"})

Document
A class from the langchain_core.documents module that contains information about some data. This class has the following two attributes: page_content (the content of the document) and metadata (arbitrary metadata associated with the document).

from langchain_core.documents import Document

doc = Document(
    page_content="""Python is an interpreted high-level general-purpose programming language.
    Python's design philosophy emphasizes code readability with its notable use of significant indentation.""",
    metadata={
        'my_document_id': 234234,
        'my_document_source': "About Python",
        'my_document_create_time': 1680013019
    }
)

PyPDFLoader
A document loader from the langchain_community.document_loaders module that loads PDFs into Document objects. You can use this document loader to extract text content from PDF files.

from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("path/to/document.pdf")
documents = loader.load()

WebBaseLoader
A document loader from the langchain_community.document_loaders module that loads content from websites into Document objects. You can use this document loader to extract text content from web pages.

from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://www.example.com")
web_data = loader.load()

CharacterTextSplitter
A text splitter from langchain.text_splitter that splits text into chunks based on characters. This splitter is useful for breaking long documents into smaller, more manageable chunks for processing with LLMs.

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(
    chunk_size=200,    # Maximum size of each chunk
    chunk_overlap=20,  # Number of characters to overlap between chunks
    separator="\n"     # Character to split on
)
chunks = text_splitter.split_documents(documents)
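split_documents operates on Document objects; for raw strings the same splitter exposes split_text. A minimal sketch:

# split_text takes a plain string and returns a list of string chunks,
# using the same chunk_size/chunk_overlap settings.
text_chunks = text_splitter.split_text("A long passage of text...\nMore text on a new line.")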

RecursiveCharacterTextSplitter
A text splitter from langchain.text_splitter that splits text recursively based on a list of separators. This splitter tries to split on the first separator, then the second separator, and any subsequent separators, until the chunks of text attain the specified size.

from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=50,
    separators=["\n\n", "\n", ". ", " ", ""]
)
chunks = text_splitter.split_documents(documents)

WatsonxEmbeddings
A class from langchain_ibm that creates embeddings (vector representations) of text using IBM's watsonx.ai embedding models. You can use these embeddings for semantic search and other vector-based operations.

from langchain_ibm import WatsonxEmbeddings
from ibm_watsonx_ai.metanames import EmbedTextParamsMetaNames

embed_params = {
    EmbedTextParamsMetaNames.TRUNCATE_INPUT_TOKENS: 3,
    EmbedTextParamsMetaNames.RETURN_OPTIONS: {"input_text": True},
}

watsonx_embedding = WatsonxEmbeddings(
    model_id="ibm/slate-125m-english-rtrvr",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="skills-network",
    params=embed_params,
)


Chroma
A vector store from langchain_community.vectorstores that stores embeddings and provides methods for similarity search. You can use Chroma for storing and retrieving documents based on semantic similarity.

from langchain_community.vectorstores import Chroma

# Create a vector store from documents
docsearch = Chroma.from_documents(chunks, watsonx_embedding)

# Perform a similarity search
query = "Langchain"
docs = docsearch.similarity_search(query)
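Chroma can also report how close each match is. A minimal sketch, assuming the docsearch store above; similarity_search_with_score pairs each Document with a distance score (lower means more similar):

# Each result is a (Document, score) tuple; k limits the number of matches.
results = docsearch.similarity_search_with_score(query, k=3)
for doc, score in results:
    print(score, doc.page_content[:80])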

Retrievers
Interfaces that return documents given an unstructured query. Retrievers accept a string query as input and return a list of Document objects as output. You can use vector stores as the backbone of a retriever.

# Convert a vector store to a retriever
retriever = docsearch.as_retriever()

# Retrieve documents
docs = retriever.invoke("Langchain")
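as_retriever also accepts search settings; a minimal sketch limiting the retriever to the top three matches (the search_kwargs values here are illustrative):

# search_kwargs is forwarded to the underlying similarity search;
# k=3 caps the number of returned documents.
retriever = docsearch.as_retriever(search_kwargs={"k": 3})
docs = retriever.invoke("Langchain")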

ParentDocumentRetriever
A retriever from langchain.retrievers that splits documents into small chunks for embedding but returns the parent documents during retrieval. This retriever balances accurate embeddings with context preservation.

from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore

parent_splitter = CharacterTextSplitter(chunk_size=2000, chunk_overlap=20)
child_splitter = CharacterTextSplitter(chunk_size=400, chunk_overlap=20)

vectorstore = Chroma(
    collection_name="split_parents",
    embedding_function=watsonx_embedding
)

store = InMemoryStore()

retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)

retriever.add_documents(documents)
retrieved_docs = retriever.invoke("Langchain")


RetrievalQA
A chain from langchain.chains that answers questions based on retrieved documents. The RetrievalQA chain combines a retriever with an LLM to generate answers based on the retrieved context.

from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=mixtral_llm,
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    return_source_documents=False
)

query = "what is this paper discussing?"
answer = qa.invoke(query)

ChatMessageHistory
A lightweight wrapper from langchain.memory that provides convenient methods for saving HumanMessages and AIMessages, and then fetching them all. You can use the ChatMessageHistory wrapper to maintain conversation history.

from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()

history.add_ai_message("hi!")
history.add_user_message("what is the capital of France?")

# Access the messages
history.messages

# Generate a response using the history
ai_response = mixtral_llm.invoke(history.messages)

ConversationBufferMemory
A memory module from langchain.memory that allows for the storage of messages and conversation history. You can use this memory module in conversation chains to maintain context across multiple interactions.

from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

conversation = ConversationChain(
    llm=mixtral_llm,
    verbose=True,
    memory=ConversationBufferMemory()
)

response = conversation.predict(input="Hello, I am a little cat. Who are you?")
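Because the buffer memory stores each exchange, a second call can refer back to the first; a minimal sketch continuing the conversation above:

# The memory replays the earlier exchange, so the model can answer
# using what the user said in the first turn.
follow_up = conversation.predict(input="What kind of animal did I say I was?")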


LLMChain
A basic chain from langchain.chains that combines a prompt template with an LLM. It's the simplest form of chain in LangChain.

from langchain.chains import LLMChain

template = """Your job is to come up with a classic dish from the area that the user suggests.
{location}

YOUR RESPONSE:
"""
prompt_template = PromptTemplate(template=template, input_variables=['location'])

location_chain = LLMChain(
    llm=mixtral_llm,
    prompt=prompt_template,
    output_key='meal'
)

result = location_chain.invoke(input={'location': 'China'})
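invoke on an LLMChain returns a dict containing the inputs plus the output_key; a minimal sketch of reading the answer (key names follow the chain defined above):

# The model's output is stored under the chain's output_key, 'meal'.
print(result['meal'])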

SequentialChain
A chain from langchain.chains that combines multiple chains in sequence, where the output of one chain becomes the input for the next chain. SequentialChain is useful for multi-step processing.

from langchain.chains import SequentialChain

# First chain - gets a meal based on location
location_chain = LLMChain(
    llm=mixtral_llm,
    prompt=location_prompt_template,
    output_key='meal'
)

# Second chain - gets a recipe based on meal
dish_chain = LLMChain(
    llm=mixtral_llm,
    prompt=dish_prompt_template,
    output_key='recipe'
)

# Third chain - estimates cooking time
recipe_chain = LLMChain(
    llm=mixtral_llm,
    prompt=recipe_prompt_template,
    output_key='time'
)

# Combine into sequential chain
overall_chain = SequentialChain(
    chains=[location_chain, dish_chain, recipe_chain],
    input_variables=['location'],
    output_variables=['meal', 'recipe', 'time'],
    verbose=True
)
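The combined chain is defined but not run above; a minimal invocation sketch (the result is a dict containing 'location' plus the three declared output keys):

# Each sub-chain feeds the next; the final dict holds every declared output.
result = overall_chain.invoke({"location": "China"})
print(result["meal"], result["recipe"], result["time"])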


RunnablePassthrough
A component from langchain_core.runnables that passes its input through unchanged and, via its 'assign' method, adds new keys computed from that input, enabling structured multi-step processing.

from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from pprint import pprint

# Create each individual chain with the pipe operator
location_chain_lcel = (
    PromptTemplate.from_template(location_template)
    | mixtral_llm
    | StrOutputParser()
)

dish_chain_lcel = (
    PromptTemplate.from_template(dish_template)
    | mixtral_llm
    | StrOutputParser()
)

time_chain_lcel = (
    PromptTemplate.from_template(time_template)
    | mixtral_llm
    | StrOutputParser()
)

overall_chain_lcel = (
    RunnablePassthrough.assign(meal=lambda x: location_chain_lcel.invoke({"location": x["location"]}))
    | RunnablePassthrough.assign(recipe=lambda x: dish_chain_lcel.invoke({"meal": x["meal"]}))
    | RunnablePassthrough.assign(time=lambda x: time_chain_lcel.invoke({"recipe": x["recipe"]}))
)

# Run the chain
result = overall_chain_lcel.invoke({"location": "China"})
pprint(result)

Tool
A class from langchain_core.tools that represents an interface that an agent, chain, or LLM can use to interact with the world. Tools perform specific tasks like calculations and data retrieval.

from langchain_core.tools import Tool
from langchain_experimental.utilities import PythonREPL

python_repl = PythonREPL()

python_calculator = Tool(
    name="Python Calculator",
    func=python_repl.run,
    description="Useful for when you need to perform calculations or execute Python code. Input should be valid Python code."
)

result = python_calculator.invoke("a = 3; b = 1; print(a+b)")


@tool decorator
A decorator from langchain_core.tools that simplifies the creation of custom tools. This decorator automatically converts a function into a Tool object.

from langchain_core.tools import tool

@tool
def search_weather(location: str):
    """Search for the current weather in the specified location."""
    # In a real application, this function would call a weather API
    return f"The weather in {location} is currently sunny and 72°F."
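The decorated function becomes a Tool whose name and description come from the function name and docstring; a minimal usage sketch:

# The decorator derives the tool's metadata from the function itself.
print(search_weather.name)         # "search_weather"
print(search_weather.description)  # taken from the docstring
result = search_weather.invoke("Paris")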

create_react_agent
A function from langchain.agents that creates an agent following the ReAct (Reasoning + Acting) framework. This function takes an LLM, a list of tools, and a prompt template as input and returns an agent that can reason and select tools to accomplish tasks.

from langchain.agents import create_react_agent

agent = create_react_agent(
    llm=mixtral_llm,
    tools=tools,
    prompt=prompt
)
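The tools and prompt variables above are assumed to exist already; a minimal sketch of one way to prepare them, pulling the commonly used ReAct prompt from the LangChain Hub (the hub prompt name is an assumption, not taken from this cheat sheet):

# A one-tool list plus a stock ReAct prompt template.
from langchain import hub

tools = [python_calculator]           # any list of Tool objects works
prompt = hub.pull("hwchase17/react")  # standard ReAct prompt template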

AgentExecutor
A class from langchain.agents that manages the execution flow of an agent. This class handles the orchestration between the agent's reasoning and the actual tool execution.

from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True
)

result = agent_executor.invoke({"input": "What is the square root of 256?"})

Author
Hailey Quach
