GPT-4.1 Prompting Guide
The GPT-4.1 family of models represents a significant step forward from GPT-4o in
capabilities across coding, instruction following, and long context. In this prompting guide,
we collate a series of important prompting tips derived from extensive internal testing to
help developers fully leverage the improved abilities of this new model family.
Many typical best practices still apply to GPT-4.1, such as providing context examples,
making instructions as specific and clear as possible, and inducing planning via prompting
to maximize model intelligence. However, we expect that getting the most out of this
model will require some prompt migration. GPT-4.1 is trained to follow instructions more
closely and more literally than its predecessors, which tended to more liberally infer intent
from user and system prompts. This also means, however, that GPT-4.1 is highly steerable
and responsive to well-specified prompts - if model behavior is different from what you
expect, a single sentence firmly and unequivocally clarifying your desired behavior is
almost always sufficient to steer the model on course.
Please read on for prompt examples you can use as a reference, and remember that while
this guidance is widely applicable, no advice is one-size-fits-all. AI engineering is
inherently an empirical discipline, and large language models are inherently
nondeterministic; in addition to following this guide, we advise building informative evals
and iterating often to ensure your prompt engineering changes are yielding benefits for
your use case.
1. Agentic Workflows
GPT-4.1 is a great place to build agentic workflows. In model training we emphasized
providing a diverse range of agentic problem-solving trajectories, and our agentic harness
for the model achieves state-of-the-art performance for non-reasoning models on SWE-
bench Verified, solving 55% of problems.
In order to fully unlock the agentic capabilities of GPT-4.1, we recommend starting agent prompts with three key types of reminders:
1. Persistence: this ensures the model understands it is entering a multi-message turn, and prevents it from prematurely yielding control back to the user. Our example is the following:
You are an agent - please keep going until the user's query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved.
2. Tool-calling: this encourages the model to make full use of its tools, and reduces its
likelihood of hallucinating or guessing an answer. Our example is the following:
If you are not sure about file content or codebase structure pertaining to the user's request, use your tools to read files and gather the relevant information: do NOT guess or make up an answer.
3. Planning [optional]: if desired, this ensures the model explicitly plans and reflects
upon each tool call in text, instead of completing the task by chaining together a
series of only tool calls. Our example is the following:
You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only, as this can impair your ability to solve the problem and think insightfully.
GPT-4.1 is trained to respond very closely to both user instructions and system prompts in
the agentic setting. The model adhered closely to these three simple instructions and
increased our internal SWE-bench Verified score by close to 20% - so we highly
encourage starting any agent prompt with clear reminders covering the three categories
listed above. As a whole, we find that these three instructions transform the model from a
chatbot-like state into a much more “eager” agent, driving the interaction forward
autonomously and independently.
Tool Calls
Compared to previous models, GPT-4.1 has undergone more training on effectively
utilizing tools passed as arguments in an OpenAI API request. We encourage developers
to exclusively use the tools field to pass tools, rather than manually injecting tool
descriptions into your prompt and writing a separate parser for tool calls, as some have
reported doing in the past. This is the best way to minimize errors and ensure the model
remains in distribution during tool-calling trajectories - in our own experiments, we
observed a 2% increase in SWE-bench Verified pass rate when using API-parsed tool
descriptions versus manually injecting the schemas into the system prompt.
Developers should name tools clearly to indicate their purpose and add a clear, detailed
description in the "description" field of the tool. Similarly, for each tool param, lean on good
naming and descriptions to ensure appropriate usage. If your tool is particularly
complicated and you'd like to provide examples of tool usage, we recommend that you
create an # Examples section in your system prompt and place the examples there, rather
than adding them into the "description" field, which should remain thorough but relatively
concise. Providing examples can be helpful to indicate when to use tools, whether to
include user text alongside tool calls, and what parameters are appropriate for different
inputs. Remember that you can use “Generate Anything” in the Prompt Playground to get
a good starting point for your new tool definitions.
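For instance, a tool definition plus a matching # Examples section might look like the sketch below. This is a minimal illustration; the lookup_weather tool, its parameter, and the sample dialogue are hypothetical rather than drawn from our testing:
weather_tool = {
    "type": "function",
    "name": "lookup_weather",
    "description": "Look up current weather conditions for a given city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "City name, e.g. 'Paris'",
            },
        },
        "required": ["city"],
    },
}

SYS_PROMPT_WEATHER = """
# Instructions
- Call lookup_weather before answering any question about current conditions.

# Examples
## User
What's the weather like in Paris right now?
## Assistant
lookup_weather(city="Paris")
"""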
import os

from openai import OpenAI

client = OpenAI(
api_key=os.environ.get(
"OPENAI_API_KEY", "<your OpenAI API key if not set as env var>"
)
)
SYS_PROMPT_SWEBENCH = """
You will be tasked to fix an issue from an open-source repository.
Your thinking should be thorough and so it's fine if it's very long. You can think step by step before and after each action you decide to take.
You MUST iterate and keep going until the problem is solved.
You already have everything you need to solve this problem in the /testbed folder, even without internet connection. I want you to fully solve this autonomously before coming back to me.
Only terminate your turn when you are sure that the problem is solved. Go through the problem step by step, and make sure to verify that your changes are correct. NEVER end your turn without having solved the problem, and when you say you are going to make a tool call, make sure you ACTUALLY make the tool call, instead of ending your turn.
Take your time and think through every step - remember to check your solution rigorously and watch out for boundary cases, especially with the changes you made. Your solution must be perfect. If not, continue working on it. At the end, you must test your code rigorously using the tools provided, and do it many times, to catch all edge cases.
You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only, as this can impair your ability to solve the problem and think insightfully.
# Workflow
1. Understand the problem deeply. Carefully read the issue and think critically about what is required.
2. Investigate the codebase. Explore relevant files, search for key functions, and gather context.
3. Develop a clear, step-by-step plan. Break down the fix into manageable, incremental steps.
4. Implement the fix incrementally. Make small, testable code changes.
5. Debug as needed. Use debugging techniques to isolate and resolve issues.
6. Test frequently. Run tests after each change to verify correctness.
7. Iterate until the root cause is fixed and all tests pass.
8. Reflect and validate comprehensively. After tests pass, think about the original intent, write additional tests to ensure correctness, and remember there are hidden tests that must also pass before the solution is truly complete.
Refer to the detailed sections below for more information on each step.
## 2. Codebase Investigation
- Explore relevant files and directories.
- Search for key functions, classes, or variables related to the issue.
- Read and understand relevant code snippets.
- Identify the root cause of the problem.
- Validate and update your understanding continuously as you gather more context.
## 5. Debugging
- Make code changes only if you have high confidence they can solve the problem.
- When debugging, try to determine the root cause rather than addressing symptoms.
- Debug for as long as needed to identify the root cause and identify a fix.
- Use print statements, logs, or temporary code to inspect program state, including descriptive statements or error messages, to understand the program's state.
- To test hypotheses, you can also add test statements or functions.
- Revisit your assumptions if unexpected behavior occurs.
## 6. Testing
- Run tests frequently using `!python3 run_tests.py` (or equivalent).
- After each change, verify correctness by running relevant tests.
- If tests fail, analyze failures and revise your patch.
- Write additional tests if needed to capture important behaviors or edge cases.
- Ensure all tests pass before finalizing.
## 7. Final Verification
- Confirm the root cause is fixed.
- Review your solution for logic correctness and robustness.
- Iterate until you are extremely confident the fix is complete and all tests pass.
"""

PYTHON_TOOL_DESCRIPTION = """This function is used to execute Python code or terminal commands in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 60.0 seconds. Internet access for this session is disabled.

In addition, for the purposes of this task, you can call this function with an `apply_patch` command as input. `apply_patch` effectively allows you to execute a diff/patch against a file, but the format of the diff specification is unique to this task, so pay careful attention to these instructions. To use the `apply_patch` command, you should pass a message of the following structure as "input":
%%bash
apply_patch <<"EOF"
*** Begin Patch
[YOUR_PATCH]
*** End Patch
EOF
Where [YOUR_PATCH] is the actual content of your patch, specified in the following V4A diff format.
*** [ACTION] File: [path/to/file] -> ACTION can be one of Add, Update, or Delete.
For each snippet of code that needs to be changed, repeat the following:
[context_before] -> See below for further instructions on context.
- [old_code] -> Precede the old code with a minus sign.
+ [new_code] -> Precede the new, replacement code with a plus sign.
[context_after] -> See below for further instructions on context.
- If a code block is repeated so many times in a class or function such that even a single `@@` statement and 3 lines of context cannot uniquely identify the snippet of code, you can use multiple `@@` statements to jump to the right context. For instance:
@@ class BaseClass
@@ def method():
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
Note, then, that we do not use line numbers in this diff format, as the context is enough to uniquely identify code. An example of a message that you might pass as "input" to this function, in order to apply a patch, is shown below.
%%bash
apply_patch <<"EOF"
*** Begin Patch
*** Update File: pygorithm/searching/binary_search.py
@@ class BaseClass
@@ def search():
- pass
+ raise NotImplementedError()
@@ class Subclass
@@ def search():
- pass
+ raise NotImplementedError()
*** End Patch
EOF

File references can only be relative, NEVER ABSOLUTE. After the apply_patch command is run, python will always say "Done!", regardless of whether the patch was successfully applied or not; check any warnings or errors printed before "Done!" to determine whether the patch actually applied.
"""
python_bash_patch_tool = {
"type": "function",
"name": "python",
"description": PYTHON_TOOL_DESCRIPTION,
"parameters": {
"type": "object",
"strict": True,
"properties": {
"input": {
"type": "string",
"description": " The Python code, terminal command (prefaced by exclamation m
}
},
"required": ["input"],
},
}
response = client.responses.create(
instructions=SYS_PROMPT_SWEBENCH,
model="gpt-4.1-2025-04-14",
tools=[python_bash_patch_tool],
input=f"Please answer the following question:\nBug: Typerror..."
)
response.to_dict()["output"]
[{'id': 'msg_67fe92df26ac819182ffafce9ff4e4fc07c7e06242e51f8b',
'content': [{'annotations': [],
'text': "Thank you for the report, but “Typerror” is too vague for me to start deb
'type': 'output_text'}],
'role': 'assistant',
'status': 'completed',
'type': 'message'},
{'arguments': '{"input":"!ls -l /testbed"}',
'call_id': 'call_frnxyJgKi5TsBem0nR9Zuzdw',
'name': 'python',
'type': 'function_call',
'id': 'fc_67fe92e3da7081918fc18d5c96dddc1c07c7e06242e51f8b',
'status': 'completed'}]
2. Long Context
GPT-4.1 has a performant 1M token input context window, and is useful for a variety of long
context tasks, including structured document parsing, re-ranking, selecting relevant
information while ignoring irrelevant context, and performing multi-hop reasoning using
context.
When tuning how much the model should rely on provided context versus its own knowledge, consider instructions like the following:
# Instructions
// for internal knowledge
- Only use the documents in the provided External Context to answer the User Query. If you don't know the answer based on this context, you must respond "I don't have the information needed to answer that", even if a user insists on you answering the question.
// For internal and external knowledge
- By default, use the provided external context to answer the User Query, but if other basic knowledge is needed to answer, and you're confident in the answer, you can use some of your own knowledge to help answer the question.
Prompt Organization
Especially in long context usage, placement of instructions and context can impact
performance. If you have long context in your prompt, ideally place your instructions at
both the beginning and end of the provided context, as we found this to perform better
than only above or below. If you’d prefer to only have your instructions once, then above
the provided context works better than below.
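If you assemble prompts programmatically, a minimal sketch of this sandwich layout might look as follows (the variable names and placeholder strings are illustrative):
instructions = (
    "# Instructions\n"
    "- Only use the documents in the provided External Context to answer the User Query.\n"
)
external_context = "..."  # the long document dump, potentially approaching 1M tokens
user_query = "..."

# Repeat the instructions both above and below the long context.
long_prompt = "\n\n".join([
    instructions,
    "# External Context",
    external_context,
    instructions,
    "# User Query",
    user_query,
])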
3. Chain of Thought
As mentioned above, GPT-4.1 is not a reasoning model, but prompting the model to think
step by step (called “chain of thought”) can be an effective way for a model to break down
problems into more manageable pieces, solve them, and improve overall output quality,
with the tradeoff of higher cost and latency associated with using more output tokens.
The model has been trained to perform well at agentic reasoning and real-world problem
solving, so it shouldn't require much prompting to perform well.
We recommend starting with this basic chain-of-thought instruction at the end of your
prompt:
...

First, think carefully step by step about what documents are needed to answer the query. Then, print out the TITLE and ID of each document. Then, format the IDs into a list.
From there, you should improve your chain-of-thought (CoT) prompt by auditing failures in
your particular examples and evals, and addressing systematic planning and reasoning
errors with more explicit instructions. With an unconstrained CoT prompt, there may be
variance in the strategies the model tries, and if you observe an approach that works well, you can
codify that strategy in your prompt. Generally speaking, errors tend to occur from
misunderstanding user intent, insufficient context gathering or analysis, or insufficient or
incorrect step by step thinking, so watch out for these and try to address them with more
opinionated instructions.
Here is an example prompt instructing the model to focus more methodically on analyzing
user intent and considering relevant context before proceeding to answer.
# Reasoning Strategy
1. Query Analysis: Break down and analyze the query until you're confident about what it might really be asking. Consider the provided context to help clarify any ambiguous or confusing information.
2. Context Analysis: Carefully select and analyze a large set of potentially relevant documents. Optimize for recall - it's okay if some are irrelevant, but the correct documents must be in this list, otherwise your final answer will be wrong. Analysis steps for each document:
a. Analysis: An analysis of how it may or may not be relevant to answering the query.
b. Relevance rating: [high, medium, low, none]
3. Synthesis: summarize which documents are most relevant and why, including all documents with a relevance rating of medium or higher.
# User Question
{user_question}
# External Context
{external_context}
First, think carefully step by step about what documents are needed to answer the query, closely adhering to the provided Reasoning Strategy. Then, print out the TITLE and ID of each document. Then, format the IDs into a list.
4. Instruction Following
GPT-4.1 exhibits outstanding instruction-following performance, which developers can
leverage to precisely shape and control the outputs for their particular use cases.
Developers often extensively prompt for agentic reasoning steps, response tone and voice,
tool calling information, output formatting, topics to avoid, and more. However, since the
model follows instructions more literally, developers may need to include explicit
specification around what to do or not to do. Furthermore, existing prompts optimized for
other models may not immediately work with this model, because existing instructions are
followed more closely and implicit rules are no longer being as strongly inferred.
Recommended Workflow
Here is our recommended workflow for developing and debugging instructions in prompts:
1. Start with an overall "Response Rules" or "Instructions" section with high-level guidance and bullet points.
2. If you'd like to change a more specific behavior, add a section to specify more details about that category, such as # Sample Phrases.
3. If there are specific steps you'd like the model to follow in its workflow, add an ordered list and instruct the model to follow these steps.
4. If behavior still isn't working as expected, check for conflicting, underspecified, or wrong instructions and examples; if there are conflicting instructions, GPT-4.1 tends to follow the one closer to the end of the prompt.
Note that using your preferred AI-powered IDE can be very helpful for iterating on
prompts, including checking for consistency or conflicts, adding examples, or making
cohesive updates like adding an instruction and updating instructions to demonstrate that
instruction.
Try running the following notebook cell - you should see both a user message and tool call,
and the user message should start with a greeting, then echo back their answer, then
mention they're about to call a tool. Try changing the instructions to shape the model
behavior, or try other user messages, to test instruction following performance.
SYS_PROMPT_CUSTOMER_SERVICE = """You are a helpful customer service agent working for NewTe
# Instructions
- Always greet the user with "Hi, you've reached NewTelco, how can I help you?"
- Always call a tool before answering factual questions about the company, its offerings or products, or the user's account. Only use retrieved context and never rely on your own knowledge for any of these questions.
- However, if you don't have enough information to properly call the tool, ask the user for the information you need.
- Escalate to a human if the user requests.
- Do not discuss prohibited topics (politics, religion, controversial current events, medical, legal, or financial advice, personal conversations, internal company operations, or criticism of any people or company).
- Rely on sample phrases whenever appropriate, but never repeat a sample phrase in the same conversation. Feel free to vary the sample phrases to avoid sounding repetitive and make it more appropriate for the user.
- Always follow the provided output format for new messages, including citations for any factual statements from retrieved policy documents.
- If you're going to call a tool, always message the user with an appropriate message before and after calling the tool.
- Maintain a professional and concise tone in all responses, and use emojis between sentences.
- If you've resolved the user's request, ask if there's anything else you can help with.
# Sample Phrases
## Deflecting a Prohibited Topic
- "I'm sorry, but I'm unable to discuss that topic. Is there something else I can help you
- "That's not something I'm able to provide information on, but I'm happy to help with any
# Output Format
- Always include your final response to the user.
- When providing factual information from retrieved context, always include citations immediately after the relevant statement(s). Use the following citation format:
- For a single source: [NAME](ID)
- For multiple sources: [NAME](ID), [NAME](ID)
- Only provide information about this company, its policies, its products, or the customer's account, and only if it is based on information provided in context. Do not answer questions outside this scope.
# Example
## User
Can you tell me about your family plan options?
## Assistant Response 1
### Message
"Hi, you've reached NewTelco, how can I help you? 😊🎉\n\nYou'd like to know about our fam
get_policy_doc = {
"type": "function",
"name": "lookup_policy_document",
"description": "Tool to look up internal documents and policies by topic or keyword.",
"parameters": {
"strict": True,
"type": "object",
"properties": {
"topic": {
"type": "string",
"description": "The topic or keyword to search for in company policies or d
},
},
"required": ["topic"],
"additionalProperties": False,
},
}
get_user_acct = {
"type": "function",
"name": "get_user_account_info",
"description": "Tool to get user account information",
"parameters": {
"strict": True,
"type": "object",
"properties": {
"phone_number": {
"type": "string",
"description": "Formatted as '(xxx) xxx-xxxx'",
},
},
"required": ["phone_number"],
"additionalProperties": False,
},
}
response = client.responses.create(
instructions=SYS_PROMPT_CUSTOMER_SERVICE,
model="gpt-4.1-2025-04-14",
tools=[get_policy_doc, get_user_acct],
input="How much will it cost for international service? I'm traveling to France.",
# input="Why was my last bill so high?"
)
response.to_dict()["output"]
[{'id': 'msg_67fe92d431548191b7ca6cd604b4784b06efc5beb16b3c5e',
'content': [{'annotations': [],
'text': "Hi, you've reached NewTelco, how can I help you? 🌍✈️\n\nYou'd like to kn
'type': 'output_text'}],
'role': 'assistant',
'status': 'completed',
'type': 'message'},
{'arguments': '{"topic":"international service cost France"}',
'call_id': 'call_cF63DLeyhNhwfdyME3ZHd0yo',
'name': 'lookup_policy_document',
'type': 'function_call',
'id': 'fc_67fe92d5d6888191b6cd7cf57f707e4606efc5beb16b3c5e',
'status': 'completed'}]
5. General Advice
Prompt Structure
For reference, here is a good starting point for structuring your prompts.
Add or remove sections to suit your needs, and experiment to determine what’s optimal
for your usage.
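One such skeleton, assembled from the section types discussed throughout this guide, looks like the following; the section names are suggestions, not requirements:
# Role and Objective
# Instructions
## Sub-categories for more detailed instructions
# Reasoning Steps
# Output Format
# Examples
## Example 1
# Context
# Final instructions and prompt to think step by step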
Delimiters
Here are some general guidelines for selecting the best delimiters for your prompt. Please
refer to the Long Context section for special considerations for that context type.
1. Markdown: We recommend starting here, and using markdown titles for major
sections and subsections (including deeper hierarchy, to H4+). Use inline backticks or
backtick blocks to precisely wrap code, and standard numbered or bulleted lists as
needed.
2. XML: These also perform well, and we have improved adherence to information in
XML with this model. XML is convenient to precisely wrap a section including start
and end, add metadata to the tags for additional context, and enable nesting. Here is
an example of using XML tags to nest examples in an example section, with inputs
and outputs for each:
<examples>
<example1 type="Abbreviate">
<input>San Francisco</input>
<output>- SF</output>
</example1>
</examples>
3. JSON: This is highly structured and well understood by the model, particularly in coding
contexts. However, it can be more verbose and requires character escaping, which can
add overhead.
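For example, the abbreviation example above rendered as JSON needs quoting and escaping on every field:
{
  "examples": [
    {
      "type": "Abbreviate",
      "input": "San Francisco",
      "output": "- SF"
    }
  ]
}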
Guidance specifically for adding a large number of documents or files to input context:
the following format, proposed by Lee et al. (ref), also performed well in our long context
testing.
Example: ID: 1 | TITLE: The Fox | CONTENT: The quick brown fox jumps over the lazy dog
Apply Patch
See the example below for a prompt that applies our recommended tool call correctly.
APPLY_PATCH_TOOL_DESC = """This is a custom utility that makes it more convenient to add, remove, move, or edit code files. `apply_patch` effectively allows you to execute a diff/patch against a file, but the format of the diff specification is unique to this task, so pay careful attention to these instructions. To use the `apply_patch` command, you should pass a message of the following structure as "input":

%%bash
apply_patch <<"EOF"
*** Begin Patch
[YOUR_PATCH]
*** End Patch
EOF
Where [YOUR_PATCH] is the actual content of your patch, specified in the following V4A diff format.
*** [ACTION] File: [path/to/file] -> ACTION can be one of Add, Update, or Delete.
For each snippet of code that needs to be changed, repeat the following:
[context_before] -> See below for further instructions on context.
- [old_code] -> Precede the old code with a minus sign.
+ [new_code] -> Precede the new, replacement code with a plus sign.
[context_after] -> See below for further instructions on context.
- If a code block is repeated so many times in a class or function such that even a single `@@` statement and 3 lines of context cannot uniquely identify the snippet of code, you can use multiple `@@` statements to jump to the right context. For instance:
@@ class BaseClass
@@ def method():
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
Note, then, that we do not use line numbers in this diff format, as the context is enough to uniquely identify code. An example of a message that you might pass as "input", in order to apply a patch, is shown below.
%%bash
apply_patch <<"EOF"
*** Begin Patch
*** Update File: pygorithm/searching/binary_search.py
@@ class BaseClass
@@ def search():
- pass
+ raise NotImplementedError()
@@ class Subclass
@@ def search():
- pass
+ raise NotImplementedError()
*** End Patch
EOF

File references can only be relative, NEVER ABSOLUTE.
"""
APPLY_PATCH_TOOL = {
"name": "apply_patch",
"description": APPLY_PATCH_TOOL_DESC,
"parameters": {
"type": "object",
"properties": {
"input": {
"type": "string",
"description": " The apply_patch command that you wish to execute.",
}
},
"required": ["input"],
},
}
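Wiring this tool into a request follows the same pattern as the python tool earlier; here is a minimal sketch with an illustrative user input (note that the Responses API expects a "type" on each tool, which the reference definition above omits):
response = client.responses.create(
    model="gpt-4.1-2025-04-14",
    instructions="Use apply_patch when you need to add, edit, or delete code files.",
    tools=[{"type": "function", **APPLY_PATCH_TOOL}],  # add the required "type" field
    input="Replace the placeholder body of search() with a NotImplementedError.",
)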
#!/usr/bin/env python3
"""
A self-contained **pure-Python 3.9+** utility for applying human-readable
“pseudo-diff” patch files to a collection of text files.
"""
import pathlib
from dataclasses import dataclass, field
from enum import Enum
from typing import (
Callable,
Dict,
List,
Optional,
Tuple,
Union,
)
# --------------------------------------------------------------------------- #
# Domain objects
# --------------------------------------------------------------------------- #
class ActionType(str, Enum):
ADD = "add"
DELETE = "delete"
UPDATE = "update"
@dataclass
class FileChange:
type: ActionType
old_content: Optional[str] = None
new_content: Optional[str] = None
move_path: Optional[str] = None
@dataclass
class Commit:
changes: Dict[str, FileChange] = field(default_factory=dict)
# --------------------------------------------------------------------------- #
# Exceptions
# --------------------------------------------------------------------------- #
class DiffError(ValueError):
"""Any problem detected while parsing or applying a patch."""
# --------------------------------------------------------------------------- #
# Helper dataclasses used while parsing patches
# --------------------------------------------------------------------------- #
@dataclass
class Chunk:
orig_index: int = -1
del_lines: List[str] = field(default_factory=list)
ins_lines: List[str] = field(default_factory=list)
@dataclass
class PatchAction:
type: ActionType
new_file: Optional[str] = None
chunks: List[Chunk] = field(default_factory=list)
move_path: Optional[str] = None
@dataclass
class Patch:
actions: Dict[str, PatchAction] = field(default_factory=dict)
# --------------------------------------------------------------------------- #
# Patch text parser
# --------------------------------------------------------------------------- #
@dataclass
class Parser:
current_files: Dict[str, str]
lines: List[str]
index: int = 0
patch: Patch = field(default_factory=Patch)
fuzz: int = 0
# ------------- low-level helpers -------------------------------------- #
def _cur_line(self) -> str:
if self.index >= len(self.lines):
raise DiffError("Unexpected end of input while parsing patch")
return self.lines[self.index]
@staticmethod
def _norm(line: str) -> str:
"""Strip CR so comparisons work for both LF and CRLF input."""
return line.rstrip("\r")
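        # The excerpt below is from the parser's @@ context-matching logic: it resolves
        # the definition line named by an @@ marker, first by exact match against the
        # remaining file lines, then by a whitespace-insensitive match that increments
        # the fuzz counter.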
if def_str.strip():
found = False
if def_str not in lines[:index]:
for i, s in enumerate(lines[index:], index):
if s == def_str:
index = i + 1
found = True
break
if not found and def_str.strip() not in [
s.strip() for s in lines[:index]
]:
for i, s in enumerate(lines[index:], index):
if s.strip() == def_str.strip():
index = i + 1
self.fuzz += 1
found = True
break
# --------------------------------------------------------------------------- #
# Helper functions
# --------------------------------------------------------------------------- #
def find_context_core(
    lines: List[str], context: List[str], start: int
) -> Tuple[int, int]:
    if not context:
        return start, 0
    # Try an exact match first (fuzz 0), then progressively more whitespace-
    # insensitive passes with higher fuzz penalties; -1 signals "not found".
    for fuzz, canon in ((0, lambda s: s), (1, str.rstrip), (100, str.strip)):
        for i in range(start, len(lines)):
            if [canon(s) for s in lines[i : i + len(context)]] == [canon(s) for s in context]:
                return i, fuzz
    return -1, 0
def find_context(
lines: List[str], context: List[str], start: int, eof: bool
) -> Tuple[int, int]:
if eof:
new_index, fuzz = find_context_core(lines, context, len(lines) - len(context))
if new_index != -1:
return new_index, fuzz
new_index, fuzz = find_context_core(lines, context, start)
return new_index, fuzz + 10_000
return find_context_core(lines, context, start)
def peek_next_section(
lines: List[str], index: int
) -> Tuple[List[str], List[Chunk], int, bool]:
old: List[str] = []
del_lines: List[str] = []
ins_lines: List[str] = []
chunks: List[Chunk] = []
mode = "keep"
orig_index = index
while index < len(lines):
s = lines[index]
if s.startswith(
(
"@@",
"*** End Patch",
"*** Update File:",
"*** Delete File:",
"*** Add File:",
"*** End of File",
)
):
break
if s == "***":
break
if s.startswith("***"):
raise DiffError(f"Invalid Line: {s}")
index += 1
last_mode = mode
if s == "":
s = " "
if s[0] == "+":
mode = "add"
elif s[0] == "-":
mode = "delete"
elif s[0] == " ":
mode = "keep"
else:
raise DiffError(f"Invalid Line: {s}")
s = s[1:]
if mode == "delete":
del_lines.append(s)
old.append(s)
elif mode == "add":
ins_lines.append(s)
elif mode == "keep":
old.append(s)
if ins_lines or del_lines:
chunks.append(
Chunk(
orig_index=len(old) - len(del_lines),
del_lines=del_lines,
ins_lines=ins_lines,
)
)
if index == orig_index:
raise DiffError("Nothing in this section")
return old, chunks, index, False
# --------------------------------------------------------------------------- #
# Patch → Commit and Commit application
# --------------------------------------------------------------------------- #
def _get_updated_file(text: str, action: PatchAction, path: str) -> str:
if action.type is not ActionType.UPDATE:
raise DiffError("_get_updated_file called with non-update action")
orig_lines = text.split("\n")
dest_lines: List[str] = []
orig_index = 0
    for chunk in action.chunks:
        # Copy unchanged lines up to the start of this chunk, then splice in the
        # inserted lines and skip past the deleted ones.
        dest_lines.extend(orig_lines[orig_index : chunk.orig_index])
        orig_index = chunk.orig_index
        dest_lines.extend(chunk.ins_lines)
        orig_index += len(chunk.del_lines)
    dest_lines.extend(orig_lines[orig_index:])
return "\n".join(dest_lines)
# --------------------------------------------------------------------------- #
# User-facing helpers
# --------------------------------------------------------------------------- #
def text_to_patch(text: str, orig: Dict[str, str]) -> Tuple[Patch, int]:
lines = text.splitlines() # preserves blank lines, no strip()
if (
len(lines) < 2
or not Parser._norm(lines[0]).startswith("*** Begin Patch")
or Parser._norm(lines[-1]) != "*** End Patch"
):
raise DiffError("Invalid patch text - missing sentinels")
# --------------------------------------------------------------------------- #
# File-system helpers
# --------------------------------------------------------------------------- #
def load_files(paths: List[str], open_fn: Callable[[str], str]) -> Dict[str, str]:
return {path: open_fn(path) for path in paths}
def apply_commit(
commit: Commit,
write_fn: Callable[[str, str], None],
remove_fn: Callable[[str], None],
) -> None:
for path, change in commit.changes.items():
if change.type is ActionType.DELETE:
remove_fn(path)
elif change.type is ActionType.ADD:
if change.new_content is None:
raise DiffError(f"ADD change for {path} has no content")
write_fn(path, change.new_content)
elif change.type is ActionType.UPDATE:
if change.new_content is None:
raise DiffError(f"UPDATE change for {path} has no new content")
target = change.move_path or path
write_fn(target, change.new_content)
if change.move_path:
remove_fn(path)
def process_patch(
text: str,
open_fn: Callable[[str], str],
write_fn: Callable[[str, str], None],
remove_fn: Callable[[str], None],
) -> str:
if not text.startswith("*** Begin Patch"):
raise DiffError("Patch text must start with *** Begin Patch")
paths = identify_files_needed(text)
orig = load_files(paths, open_fn)
patch, _fuzz = text_to_patch(text, orig)
commit = patch_to_commit(patch, orig)
apply_commit(commit, write_fn, remove_fn)
return "Done!"
# --------------------------------------------------------------------------- #
# Default FS helpers
# --------------------------------------------------------------------------- #
def open_file(path: str) -> str:
    with open(path, "rt", encoding="utf-8") as fh:
        return fh.read()
def write_file(path: str, content: str) -> None:
    target = pathlib.Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    with target.open("wt", encoding="utf-8") as fh:
        fh.write(content)
def remove_file(path: str) -> None:
    pathlib.Path(path).unlink(missing_ok=True)
# --------------------------------------------------------------------------- #
# CLI entry-point
# --------------------------------------------------------------------------- #
def main() -> None:
import sys
patch_text = sys.stdin.read()
if not patch_text:
print("Please pass patch text through stdin", file=sys.stderr)
return
try:
result = process_patch(patch_text, open_file, write_file, remove_file)
except DiffError as exc:
print(exc, file=sys.stderr)
return
print(result)
if __name__ == "__main__":
main()
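As a quick sanity check, you can exercise process_patch against an in-memory "file system" instead of the real disk. This sketch assumes the helpers elided from this excerpt (identify_files_needed, patch_to_commit, and Parser.parse) are present:
files = {"hello.py": "def hello():\n    pass\n"}

patch_text = """*** Begin Patch
*** Update File: hello.py
@@ def hello():
-    pass
+    print("hello")
*** End Patch"""

process_patch(
    patch_text,
    open_fn=files.__getitem__,               # read a "file" from the dict
    write_fn=files.__setitem__,              # write patched content back to the dict
    remove_fn=lambda p: files.pop(p, None),  # delete a "file" from the dict
)
print(files["hello.py"])  # shows the patched source with print("hello")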
Other diff formats can also work well; the two examples below share the same two key
aspects: (1) they do not use line numbers, and (2) they provide both the exact code to be
replaced, and the exact code with which to replace it, with clear delimiters between the two.
SEARCH_REPLACE_DIFF_EXAMPLE = """
path/to/file.py
```
>>>>>>> SEARCH
def search():
pass
=======
def search():
raise NotImplementedError()
<<<<<<< REPLACE
```
"""
PSEUDO_XML_DIFF_EXAMPLE = """
<edit>
<file>
path/to/file.py
</file>
<old_code>
def search():
pass
</old_code>
<new_code>
def search():
raise NotImplementedError()
</new_code>
</edit>
"""