Prompt Engineering

Find the interactive version of this roadmap and more at roadmap.sh

Related Roadmaps
  AI Engineer
  AI and Data Scientist
  MLOps
  AI Red Teaming
Introduction
  What is a Prompt?
  What is Prompt Engineering?

Common Terminology
  LLMs and how they work?
  LLM Tokens
  Context Window
  Hallucination
  Prompt Injection
  Model Weights / Parameters
  Fine-Tuning vs Prompt Engg.
  AI vs AGI
  Agents
  RAG

Models offered by
  OpenAI
  Google
  Anthropic
  Meta
  xAI

LLM Configuration
  Sampling Parameters
    Temperature
    Top-K
    Top-P
  Output Control
    Max Tokens
    Stop Sequences
    Structured Outputs
  Repetition Penalties
    Frequency Penalty
    Presence Penalty
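These configuration knobs are typically set per request. Below is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name; parameter names vary across providers (Top-K, for example, is offered by some APIs but not by OpenAI's Chat Completions endpoint).

```python
# Minimal sketch of LLM configuration, assuming the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",          # illustrative model choice
    messages=[{"role": "user", "content": "List three uses of embeddings."}],
    temperature=0.2,              # sampling: lower = more deterministic
    top_p=0.9,                    # nucleus sampling cutoff
    max_tokens=200,               # output control: hard cap on output length
    stop=["\n\n"],                # output control: stop sequence(s)
    frequency_penalty=0.3,        # repetition penalty: discourage repeated tokens
    presence_penalty=0.0,         # repetition penalty: discourage repeated topics
)
print(response.choices[0].message.content)
```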
Prompting Techniques
  Zero-Shot Prompting
  One-Shot / Few-Shot Prompting
  System / Role / Contextual
    System Prompting
    Role Prompting
    Contextual Prompting
  Chain of Thought (CoT) Prompting
  Self-Consistency Prompting
  Step-back Prompting
  Tree of Thoughts (ToT) Prompting
  ReAct Prompting
  Automatic Prompt Engineering
    Use LLM to generate Prompts
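To make a few of these techniques concrete, here is a short sketch with illustrative prompts; the task, wording, and message format are assumptions for demonstration, not roadmap content.

```python
# Illustrative prompts for a few common techniques (provider-agnostic).

# Zero-shot: instruction only, no examples.
zero_shot = (
    "Classify the sentiment of this review as positive or negative: "
    "'Great battery life.'"
)

# Few-shot: a couple of worked examples before the real input.
few_shot = """Classify the sentiment of each review as positive or negative.

Review: 'Screen cracked in a week.'
Sentiment: negative

Review: 'Fast shipping and works perfectly.'
Sentiment: positive

Review: 'Great battery life.'
Sentiment:"""

# Chain of Thought (CoT): ask the model to reason before answering.
chain_of_thought = (
    "A store sells pens in packs of 12 for $3. How much do 60 pens cost? "
    "Think step by step, then state the final answer on its own line."
)

# System / role prompting is usually sent as a separate system message.
system_message = {"role": "system", "content": "You are a concise sentiment classifier."}

print(zero_shot, few_shot, chain_of_thought, system_message, sep="\n\n")
```

Self-consistency builds directly on the chain-of-thought prompt: sample it several times at a non-zero temperature and take a majority vote over the final answers.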
Prompting Best Practices
  — Provide few-shot examples for the structure or output style you need
  — Keep your prompts short and concise
  — Ask for structured output if it helps, e.g. JSON, XML, Markdown, or CSV
  — Use variables / placeholders in your prompts for easier configuration (see the sketch after this list)
  — Prioritize giving clearer instructions over adding constraints
  — Control the maximum output length
  — Experiment with input formats and writing styles
  — Tune sampling (temperature, top-k, top-p) for determinism vs creativity
  — Guard against prompt injection; sanitize user text
  — Automate evaluation; integrate unit tests for outputs
  — Document and track prompt versions
  — Optimize for latency & cost in production pipelines
  — Document decisions, failures, and learnings for future devs
  — Delimit different sections with triple backticks or XML tags

Improving Reliability
  Prompt Debiasing
  Prompt Ensembling
  LLM Self Evaluation
  Calibrating LLMs

AI Red Teaming
  AI Red Teaming Roadmap
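Several of the practices above (placeholders, delimiters, structured output, automated checks) combine naturally. The sketch below is one hypothetical way to do so; the template, JSON keys, and helper names are assumptions, not a prescribed format.

```python
# Sketch of a prompt template with placeholders, delimited input, a JSON
# output contract, and a small automated check on the result.
import json

PROMPT_TEMPLATE = """Summarize the customer feedback below.

Feedback (delimited by <feedback> tags):
<feedback>
{feedback}
</feedback>

Return only JSON with exactly these keys:
{{"summary": "<one sentence>", "sentiment": "positive" | "neutral" | "negative"}}"""


def build_prompt(feedback: str) -> str:
    # Placeholders keep the prompt configurable and easy to version.
    return PROMPT_TEMPLATE.format(feedback=feedback)


def check_output(raw: str) -> dict:
    # Unit-test-style assertions that the model honored the output contract.
    data = json.loads(raw)
    assert set(data) == {"summary", "sentiment"}, "unexpected keys"
    assert data["sentiment"] in {"positive", "neutral", "negative"}
    return data


if __name__ == "__main__":
    print(build_prompt("The app crashes every time I open the camera."))
    # check_output(model_reply) would run against a real model response,
    # e.g. inside a pytest test case.
```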
Continue learning with the following roadmaps
  AI Engineer Roadmap
  AI Agents Roadmap