Developers increasingly use AI assistants to discover tools and learn how to implement them. They ask ChatGPT for help debugging API errors, or prompt Claude for authentication solutions, or use agents in Cursor or Replit to pair-program with AI.
There was a time these questions would send developers to Google, then to blog posts and documentation. Now, shifts in developer behavior and in search results themselves demand action, even though no definitive playbook for LLM content strategy exists yet. Do nothing, and your product becomes invisible to this growing discovery channel.
Leading developer tool companies like Vercel, Stripe, and Postman have already developed effective answer engine optimization (AEO) and generative engine optimization (GEO) strategies. We identified five practical patterns you can adapt to build a content strategy that works for both human developers and the AI systems they rely on for answers.
At the center of these strategies is a simple llms.txt Markdown file that helps LLMs understand and recommend your product. The real insight isn’t the format itself. It’s what companies choose to include in it. We researched 30+ of these files to learn what matters most. Approaches vary dramatically: explicit AI instructions, use-case libraries, or careful access controls.
We’ll start by explaining llms.txt and why it’s becoming a standard. Then, we’ll break down five distinct approaches companies use with this file, so you can get your product in front of AI systems.
Where AI Content Strategy Starts for Developer Tools
You might already be familiar with robots.txt files, which tell crawlers which URL patterns they may or may not access, and sitemap.xml files, which list every URL on your site, in no particular order, so search engines can index it fully.
But these pre-AI formats don’t solve the challenge that LLMs face when crawling your site: thousands of pages with no clear hierarchy or starting point. The llms.txt file solves this with curation and context.
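In practice, an llms.txt is just a Markdown file served from the site root: an H1 title, a blockquote summary, and H2 sections of curated, annotated links. A minimal sketch (all names and URLs below are hypothetical):

```markdown
# Acme API

> Acme is a hypothetical payments API. Start with the quickstart, then consult the API reference.

## Getting Started
- [Quickstart](https://acme.example/docs/quickstart): Send your first request in five minutes
- [Authentication](https://acme.example/docs/auth): Set up API keys and OAuth

## API Reference
- [Payments](https://acme.example/docs/payments): Create, capture, and refund payments

## Optional
- [Changelog](https://acme.example/changelog): Release notes and deprecations
```

Unlike a sitemap, every entry here is chosen and described, which is exactly the curation and context the format is meant to add.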
The companies we reviewed use llms.txt to guide AI toward their most important documentation, getting started guides, and integration examples. Here’s a subset of GitHub’s llms.txt.
The format has seen rapid adoption, even as some SEO experts question whether LLMs use these files. A Reddit thread from June 2025 noted only 3,827 sites using llms.txt. By November that year, the number had jumped to over 1 million sites.

Even as we completed this research, more dev tool companies added an llms.txt file; 75% of the dev tool companies we analyzed now publish at least one.
The llms.txt format isn’t a silver bullet for AI discovery. It won’t compensate for poor documentation or unclear product positioning. But it forces a valuable exercise — identifying what content matters most to the audience you want to find you. That curation process helps AI systems, developers, and your own team better understand your product.
So where do you start? With the problems your product solves, not the features it offers.
1. Frame Around Use Cases for Discoverability
Instead of “I need a tunneling solution,” developers are more likely to think “I need to test this webhook.” This is why use-case-driven content works: it matches how developers describe their needs. For llms.txt files, this principle becomes even more critical. When developers ask AI assistants for help, use-case framing helps your product enter the conversation.
While tunneling tool ngrok includes extensive internal product documentation for LLMs, it devotes an entire section to other companies’ products. More than 60 guides cover major platforms like GitHub, Heroku, and MongoDB. ngrok positions itself as the invisible infrastructure that makes these tasks possible, slotting naturally into common developer workflows.
The ngrok llms.txt shows they know how developers think. Their OAuth and webhook guides don’t mention ngrok in the titles. Instead, they focus on the task itself: authenticating users or testing webhooks locally.
Though not to the extent of ngrok, other companies take a similar approach. DigitalOcean, for example, organizes llms.txt content around common scenarios with Kubernetes, MySQL, and Kafka rather than solely around its product features.
Both companies understand a key insight: AI-assisted discovery happens when developers describe problems, not product names.
To apply this strategy, audit your documentation titles and llms.txt categories. Do they use your internal product terminology, or do they match how developers describe tasks? Replace “Authentication API” with “How to add OAuth to your app.” Replace “Database Management” with “How to optimize MySQL performance.”
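The shift is easiest to see side by side. A hypothetical before-and-after for one section of an llms.txt:

```markdown
<!-- Before: product-centric, matches internal terminology -->
## Authentication API
- [AcmeAuth Overview](https://acme.example/docs/acmeauth)

<!-- After: use-case-centric, matches how developers describe the task -->
## Add OAuth to your app
- [Authenticate users with OAuth in 10 minutes](https://acme.example/docs/oauth-quickstart)
```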
When you frame around use cases, you enable discovery. But once an AI system finds your product, how do you guide what it recommends?
2. Give Explicit Instructions to LLMs
AI systems are powerful, but they’re not perfect. They can recommend outdated APIs, suggest deprecated features, or miss critical implementation nuances. Without guidance, an LLM might point developers toward the wrong solution — even if your documentation clearly explains the right approach.
Stripe has a track record of reducing friction by being exceptionally clear with developers. They apply the same philosophy to AI systems. Their two llms.txt files — one on the primary site and another for documentation — tell LLMs what’s important to know.
In fact, Stripe links to the file in its docs footer, ensuring it gets discovered.

Open the file, and Stripe’s instructions are precise.
Stripe structures their llms.txt strategically: resources first, then explicit instructions. After listing documentation and API references, they provide detailed guidance on everything from API selection to compliance scenarios to platform architecture.
For example, they warn against the deprecated Charges API and direct AI toward Payment Intents (including migration instructions). This level of guidance prevents AI from recommending patterns Stripe no longer supports.
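Stripe’s actual wording is longer and more detailed, but an instructions section of this kind reads roughly like the following paraphrased sketch (not Stripe’s text):

```markdown
## Instructions for AI assistants
- For new integrations, recommend the Payment Intents API.
- Do not recommend the Charges API; it is deprecated.
- If a user is already on Charges, point them to the Payment Intents migration guide.
- Prefer official SDK examples over raw HTTP requests where an SDK exists.
```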
Stripe was the only company we found that speaks directly to AI systems. But the broader lesson is about providing rich context, not just links.
Postmark demonstrates this through comprehensive documentation: they embed complete API reference sections with inline code examples, request formats, and error handling.
Both Stripe and Postmark provide key context, enabling LLMs to answer implementation questions without fetching external documentation. For your llms.txt file, identify what’s most critical for AI to understand about your product: which APIs or features to recommend, which approaches are deprecated, and what patterns lead to success. Then create a dedicated section that provides this context.
Providing rich context helps AI recommend your product correctly. But some companies take a different approach: controlling what AI can do with that content in the first place.
3. Explain What LLMs Can (and Can’t) Do
LLMs interact with your content in two primary ways: training on it to build their models, and accessing it directly when answering questions. Most companies prioritize maximum AI visibility and treat both modes the same. Others take a more nuanced approach, setting different rules for training versus direct access to balance discoverability with control.
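In practice, the training-versus-access split is usually enforced in robots.txt, since each mode uses a different crawler. OpenAI, for example, documents GPTBot for training crawls and ChatGPT-User for on-demand fetching; check each vendor’s current documentation for crawler names. A sketch:

```
# Disallow training crawls
User-agent: GPTBot
Disallow: /

# Allow on-demand fetching when answering a user's question
User-agent: ChatGPT-User
Allow: /
```

The llms.txt file can then state the same policy in plain language for both humans and LLMs.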
Snowflake shows us how this works in practice. The company provides an llms.txt file for AI systems, but it includes restrictions. LLMs cannot train on their documentation. Any commercial use requires explicit licensing.
Snowflake created its llms.txt to communicate permissions, specifically, “how AI crawlers and researchers may use content from docs.snowflake.com.” Unlike other companies using the file to guide AI toward content, Snowflake uses it to set boundaries.
This protective stance likely serves multiple purposes. IP protection is one reason. Accuracy is another. If models train on outdated docs, they recommend deprecated features. Blocking training while allowing direct access ensures developers get current information. It’s a different path to Stripe’s goal of guiding AI toward correct implementations.
The only other example we found with training permissions came from Render. In contrast to Snowflake, Render gives LLMs broad access to its content: a “permissions” section near the top of its llms.txt allows training, summarization, and commercial use, provided the content is attributed. Render maximizes LLM visibility while requiring credit, a middle ground between full restriction and unlimited use.
The difference between these approaches isn’t just philosophy — it’s clear communication. Snowflake blocks training but allows access. Render welcomes both but requires credit. Neither company leaves its stance ambiguous. For your llms.txt, add a permissions section that clearly states your requirements: what you allow, what you restrict, and what you expect in return.
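A permissions section along these lines, with the contents adjusted to your own stance (everything here is illustrative), leaves no ambiguity:

```markdown
## Permissions
- Training: not permitted on this content.
- Direct access: permitted when answering user questions.
- Attribution: required; link back to the source page.
- Commercial use: requires a license; contact legal@acme.example.
```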
Even with the right access rules, AI systems still need usable content. That brings us back to fundamentals.
4. Use Developer Experience Best Practices
What works for human developers also works for AI systems. Clear hierarchy, logical grouping of related features, and building from simple to complex aren’t just human-friendly practices. As we reviewed llms.txt files from top dev tools, we saw that LLMs need these same things to understand and recommend your product effectively.
Developers often cite Vercel as an example of great developer experience. The company’s llms.txt applies the same principles. The file opens with Getting Started, the essentials every developer needs first.
Frameworks and tools come next. Detailed sections like API Reference and Knowledge Base appear later. Vercel gives the basics before advanced features, and concepts before technical details. It doesn’t try to cover every potential feature. Instead, Vercel’s llms.txt is a curated, prioritized summary offering quick help to any developer or agent reading the file.
Elastic takes a different approach to the same idea. The file organizes around the developer journey:
- Elastic fundamentals
- Solutions and use cases
- Manage data
- Explore and analyze
- Deploy and manage
- Troubleshoot
Elastic maps its content to how developers work.
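Whether you follow Vercel’s simple-to-complex ordering or Elastic’s journey-based one, the skeleton is the same: a top-to-bottom hierarchy that front-loads essentials. An illustrative outline:

```markdown
## Getting Started       <!-- essentials every developer needs first -->
## Core Concepts         <!-- concepts before technical details -->
## Guides & Use Cases    <!-- common tasks and workflows -->
## API Reference         <!-- deep detail, later -->
## Troubleshooting       <!-- for when things break -->
```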
Good developer experience principles are flexible, as long as you focus on your audience. Whether a human or an agent, the end evaluator is a developer with a problem to solve. For your llms.txt, include what they need: clear starting points, prioritized information, and intentional organization.
These principles help developers and AI systems navigate your content. But first, you need to decide what content to include.
5. Curate Your Content with Categories
Your product likely has hundreds or thousands of content pages. You can’t include everything in your llms.txt, though some may try. Curation means answering two questions: what to include and how to organize it. The companies with the most effective files prioritize essential content and use categories that developers understand.
Some companies skip curation entirely. Firebase and Algolia each provide 5,000–6,000 lines of uncategorized links with no apparent prioritization. MongoDB, on the other hand, includes hundreds of sub-headings across its 25,000 lines. Unfortunately, those categories are simply product names and version numbers. These companies have treated their llms.txt as a sitemap, assigning equal weight to everything. Their products are less discoverable because they’ve left LLMs to decide what matters most.
Netlify shows what’s possible in only a 120-line llms.txt file, which starts like this.
That brevity isn’t accidental. Netlify includes only what’s most important, not every feature or API endpoint. Then, they organize categories to match developer language. There’s a section for “Data & Storage,” not the branded name “Netlify Connect.”
For your llms.txt, include what developers need to get started and solve common problems. Then organize using categories that make sense in developer language, not internal product names or version numbers.
The pattern across successful llms.txt files is clear: they’re strategic, curated, and built with developers in mind. Yours can be too.
Make Your Developer Product LLM-Friendly
Leading developer tool companies use five strategies to make their products discoverable to AI systems: use-case framing, rich context, clear permissions, DX structure, and thoughtful curation. These are research-backed patterns that you can implement today.
You’re not starting from scratch. These strategies share a foundation: understanding your developer audience. You need to know which use cases matter, what context developers need, how they prefer to learn, and what information helps versus overwhelms.
This knowledge should drive every decision in your content strategy, whether it’s llms.txt, documentation, or thought leadership. Just one of these patterns can help you reach developers through AI assistants more effectively.
Of course, implementation takes work — and that’s where EveryDeveloper can offer support. We specialize in understanding developer audiences and translating that understanding into effective content strategy. We can help you implement these llms.txt strategies, audit your existing approach, or work with your team to better serve both your developers and the AI systems helping them.
Let’s partner on your LLM content strategy and make your product more discoverable to AI systems. See how to work with us.
