A community provider for SAP AI Core that integrates seamlessly with the Vercel AI SDK. Built on top of the official @sap-ai-sdk/orchestration and @sap-ai-sdk/foundation-models packages, this provider enables you to use SAP's enterprise-grade AI models through the familiar Vercel AI SDK interface.
- Features
- Quick Start
- Quick Reference
- Installation
- Provider Creation
- Authentication
- Basic Usage
- Supported Models
- Advanced Features
- Configuration Options
- Error Handling
- Troubleshooting
- Performance
- Security
- Debug Mode
- Examples
- Migration Guides
- Important Note
- Contributing
- Resources
- License
- 🔐 Simplified Authentication - Uses SAP AI SDK's built-in credential handling
- 🎯 Tool Calling Support - Full tool/function calling capabilities
- 🧠 Reasoning-Safe by Default - Assistant reasoning parts are not forwarded unless enabled
- 🖼️ Multi-modal Input - Support for text and image inputs
- 📡 Streaming Support - Real-time text generation with structured V3 blocks
- 🔒 Data Masking - Built-in SAP DPI integration for privacy
- 🛡️ Content Filtering - Azure Content Safety and Llama Guard support
- 🔧 TypeScript Support - Full type safety and IntelliSense
- 🎨 Multiple Models - Support for OpenAI, Claude, Gemini, Nova, and more
- ⚡ Language Model V3 - Latest Vercel AI SDK specification with enhanced streaming
- 📊 Text Embeddings - Generate vector embeddings for RAG and semantic search
- 🔀 Dual API Support - Choose between Orchestration or Foundation Models API per provider, model, or call
- 📦 Stored Configuration Support - Reference orchestration configurations or prompt templates from SAP AI Core
```bash
npm install @jerome-benoit/sap-ai-provider ai
```

```typescript
import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@jerome-benoit/sap-ai-provider";
import { generateText } from "ai";
import { APICallError } from "@ai-sdk/provider";

// Create provider (authentication via AICORE_SERVICE_KEY env var)
const provider = createSAPAIProvider();

try {
  // Generate text with gpt-4.1
  const result = await generateText({
    model: provider("gpt-4.1"),
    prompt: "Explain quantum computing in simple terms.",
  });
  console.log(result.text);
} catch (error) {
  if (error instanceof APICallError) {
    console.error("SAP AI Core API error:", error.message);
    console.error("Status:", error.statusCode);
  } else {
    console.error("Unexpected error:", error);
  }
}
```

Note: Requires the `AICORE_SERVICE_KEY` environment variable. See Environment Setup for configuration.
| Task | Code Pattern | Documentation |
|---|---|---|
| Install | `npm install @jerome-benoit/sap-ai-provider ai` | Installation |
| Auth Setup | Add `AICORE_SERVICE_KEY` to `.env` | Environment Setup |
| Create Provider | `createSAPAIProvider()` or use `sapai` | Provider Creation |
| Text Generation | `generateText({ model: provider("gpt-4.1"), prompt })` | Basic Usage |
| Streaming | `streamText({ model: provider("gpt-4.1"), prompt })` | Streaming |
| Tool Calling | `generateText({ tools: { myTool: tool({...}) } })` | Tool Calling |
| Error Handling | `if (error instanceof APICallError)` | API Reference |
| Choose Model | See 80+ models (GPT, Claude, Gemini, Llama) | Models |
| Embeddings | `embed({ model: provider.embedding("text-embedding-3-small") })` | Embeddings |
Requirements: Node.js 20+ and Vercel AI SDK 5.0+ (6.0+ recommended)

```bash
npm install @jerome-benoit/sap-ai-provider ai
```

Or with other package managers:

```bash
# Yarn
yarn add @jerome-benoit/sap-ai-provider ai

# pnpm
pnpm add @jerome-benoit/sap-ai-provider ai
```

V2 Facade Package Available: For users requiring `LanguageModelV2`/`EmbeddingModelV2` interfaces, install the dedicated V2 facade package:

```bash
npm install @jerome-benoit/sap-ai-provider-v2 ai
```

This package provides a V2-compatible facade over the internal V3 implementation.

Basic Usage Example:

```typescript
import { createSAPAIProvider } from "@jerome-benoit/sap-ai-provider-v2";
import { generateText } from "ai";

const provider = createSAPAIProvider();

const result = await generateText({
  model: provider("gpt-4.1"),
  prompt: "Hello V2!",
});

console.log(result.text);
```

For a detailed understanding of the dual-package architecture, refer to Architecture - Dual-Package.
You can create an SAP AI provider in two ways:
```typescript
import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@jerome-benoit/sap-ai-provider";

const provider = createSAPAIProvider({
  resourceGroup: "production",
  deploymentId: "your-deployment-id", // Optional
});
```

The provider supports two SAP AI Core APIs:

- Orchestration API (default): Full-featured API with data masking, content filtering, document grounding, and translation
- Foundation Models API: Direct model access with additional parameters like `logprobs`, `seed`, `logit_bias`, and `dataSources` (Azure On Your Data)
Complete example: `examples/example-foundation-models.ts`

Complete documentation: API Reference - Foundation Models API
```typescript
import { createSAPAIProvider, SAP_AI_PROVIDER_NAME } from "@jerome-benoit/sap-ai-provider";
import { generateText } from "ai";

// Provider-level API selection
const provider = createSAPAIProvider({
  api: "foundation-models", // All models use Foundation Models API
});

// Model-level API override
const model = provider("gpt-4.1", {
  api: "orchestration", // Override for this model only
});

// Per-call API override via providerOptions
const result = await generateText({
  model: provider("gpt-4.1"),
  prompt: "Hello",
  providerOptions: {
    [SAP_AI_PROVIDER_NAME]: {
      api: "foundation-models", // Override for this call only
    },
  },
});
```

Run it: `npx tsx examples/example-foundation-models.ts`

Note: The Foundation Models API does not support orchestration features (masking, filtering, grounding, translation). Attempting to use these features with the Foundation Models API will throw an `UnsupportedFeatureError`.
```typescript
import "dotenv/config"; // Load environment variables
import { sapai } from "@jerome-benoit/sap-ai-provider";
import { generateText } from "ai";

// Use directly with auto-detected configuration
const result = await generateText({
  model: sapai("gpt-4.1"),
  prompt: "Hello!",
});
```

The `sapai` export provides a convenient default provider instance with automatic configuration from environment variables or service bindings.
The provider is callable and also exposes explicit methods:

```typescript
// Callable syntax (creates language model)
const chatModel = provider("gpt-4.1");

// Explicit method syntax
const sameChatModel = provider.chat("gpt-4.1");
const embeddingModel = provider.embedding("text-embedding-3-small");
```

Available methods:

| Method | Description |
|---|---|
| `provider(modelId)` | Callable syntax, creates language model |
| `provider.chat(modelId)` | Creates language model (alias) |
| `provider.languageModel(modelId)` | Creates language model (ProviderV3 standard) |
| `provider.embedding(modelId)` | Creates embedding model (alias) |
| `provider.embeddingModel(modelId)` | Creates embedding model (ProviderV3 standard) |
| `provider.textEmbeddingModel(modelId)` | Creates embedding model (alias) |

`embedding()` and `embeddingModel()` are identical. `textEmbeddingModel()` is deprecated in the V3 package; use `embeddingModel()` instead.

Note: The V2 facade package (`@jerome-benoit/sap-ai-provider-v2`) only exposes `textEmbeddingModel()` for embeddings, per the `ProviderV2` specification. Use the V3 package if you need the `embedding()` or `embeddingModel()` aliases.
Authentication is handled automatically by the SAP AI SDK via the `AICORE_SERVICE_KEY` environment variable (local) or `VCAP_SERVICES` (SAP BTP).
→ Environment Setup Guide - Complete setup instructions, SAP BTP deployment, and troubleshooting.
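For local development, the service key is typically supplied as a single environment variable. A minimal `.env` sketch is shown below; the field names follow the usual SAP AI Core service key layout, and every value is a placeholder you must replace with the key downloaded from your SAP BTP service binding:

```shell
# .env (never commit this file)
# Paste the full service key JSON from your SAP BTP service binding as a single line.
AICORE_SERVICE_KEY='{"clientid":"<client-id>","clientsecret":"<client-secret>","url":"https://<subdomain>.authentication.<region>.hana.ondemand.com","serviceurls":{"AI_API_URL":"https://<ai-api-host>"}}'
```

On SAP BTP, skip the `.env` file entirely: the platform injects the equivalent credentials through `VCAP_SERVICES` when the app is bound to an SAP AI Core service instance.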
Complete example: `examples/example-generate-text.ts`

```typescript
const result = await generateText({
  model: provider("gpt-4.1"),
  prompt: "Write a short story about a robot learning to paint.",
});

console.log(result.text);
```

Run it: `npx tsx examples/example-generate-text.ts`
Complete example: `examples/example-simple-chat-completion.ts`

Note: Assistant `reasoning` parts are dropped by default. Set `includeReasoning: true` on the model settings if you explicitly want to forward them.
```typescript
const result = await generateText({
  model: provider("anthropic--claude-4.5-sonnet"),
  messages: [
    { role: "system", content: "You are a helpful coding assistant." },
    {
      role: "user",
      content: "How do I implement binary search in TypeScript?",
    },
  ],
});
```

Run it: `npx tsx examples/example-simple-chat-completion.ts`
Complete example: `examples/example-streaming-chat.ts`

```typescript
import { streamText } from "ai";
import { APICallError } from "@ai-sdk/provider";

try {
  const result = streamText({
    model: provider("gpt-4.1"),
    prompt: "Explain machine learning concepts.",
  });

  for await (const delta of result.textStream) {
    process.stdout.write(delta);
  }

  // Await the usage promise to surface any errors that occurred during streaming
  console.log("\n\nUsage:", await result.usage);
} catch (error) {
  if (error instanceof APICallError) {
    console.error("API Error:", error.message);
    // See Error Handling section for complete error type reference
  }
  throw error;
}
```

Run it: `npx tsx examples/example-streaming-chat.ts`
Note: For comprehensive error handling patterns, see the Error Handling section and API Reference - Error Types.
```typescript
import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@jerome-benoit/sap-ai-provider";
import { generateText } from "ai";

const provider = createSAPAIProvider();

const model = provider("gpt-4.1", {
  // Optional: include assistant reasoning parts (chain-of-thought).
  // Best practice is to keep this disabled.
  includeReasoning: false,
  modelParams: {
    temperature: 0.3,
    maxTokens: 2000,
    topP: 0.9,
  },
});

const result = await generateText({
  model,
  prompt: "Write a technical blog post about TypeScript.",
});
```

Generate vector embeddings for RAG (Retrieval-Augmented Generation), semantic search, and similarity matching.
Complete example: `examples/example-embeddings.ts`

```typescript
import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@jerome-benoit/sap-ai-provider";
import { embed, embedMany } from "ai";

const provider = createSAPAIProvider();

// Single embedding
const { embedding } = await embed({
  model: provider.embedding("text-embedding-3-small"),
  value: "What is machine learning?",
});

// Multiple embeddings
const { embeddings } = await embedMany({
  model: provider.embedding("text-embedding-3-small"),
  values: ["Hello world", "AI is amazing", "Vector search"],
});
```

Run it: `npx tsx examples/example-embeddings.ts`
Note: Embedding model availability depends on your SAP AI Core tenant configuration. Common providers include OpenAI, Amazon Titan, and NVIDIA.
For complete embedding API documentation, see API Reference: Embeddings.
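Embedding vectors are typically compared with cosine similarity for semantic search and similarity matching. The following self-contained sketch shows the computation; the `cosineSimilarity` helper here is illustrative only (the Vercel AI SDK also exports its own `cosineSimilarity` utility from the `ai` package):

```typescript
// Cosine similarity between two embedding vectors.
// Illustrative helper, not part of the provider API.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("Vector dimensions must match");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical vectors score 1, orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

In a RAG pipeline you would embed the query with `embed()`, embed the candidate documents with `embedMany()`, and rank documents by their similarity score against the query vector.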
This provider supports all models available through SAP AI Core, including models from OpenAI, Anthropic Claude, Google Gemini, Amazon Nova, Mistral AI, Cohere, and SAP (ABAP, RPT).
Note: Model availability depends on your SAP AI Core tenant configuration, region, and subscription. Use `provider("model-name")` with any model ID available in your environment.
For details on discovering available models, see API Reference: Supported Models.
The following helper functions are exported by this package for convenient configuration of SAP AI Core features. These builders provide type-safe configuration for data masking, content filtering, grounding, and translation modules.
Note on Terminology: This documentation uses "tool calling" (Vercel AI SDK convention), equivalent to "function calling" in OpenAI documentation. Both terms refer to the same capability of models invoking external functions.
📖 Complete guide: API Reference - Tool Calling

Complete example: `examples/example-chat-completion-tool.ts`
```typescript
import { generateText, tool } from "ai";
import { z } from "zod";
import { createSAPAIProvider } from "@jerome-benoit/sap-ai-provider";

const provider = createSAPAIProvider();

const weatherTool = tool({
  description: "Get weather for a location",
  parameters: z.object({ location: z.string() }),
  execute: async (args) => `Weather in ${args.location}: sunny, 72°F`,
});

const result = await generateText({
  model: provider("gpt-4.1"),
  prompt: "What's the weather in Tokyo?",
  tools: { getWeather: weatherTool },
  maxSteps: 3,
});
```

Run it: `npx tsx examples/example-chat-completion-tool.ts`
Complete example: `examples/example-image-recognition.ts`

```typescript
const result = await generateText({
  model: provider("gpt-4.1"),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What do you see in this image?" },
        { type: "image", image: new URL("https://example.com/image.jpg") },
      ],
    },
  ],
});
```

Run it: `npx tsx examples/example-image-recognition.ts`
Use SAP's Data Privacy Integration to mask sensitive data:
Complete example: `examples/example-data-masking.ts`

Complete documentation: API Reference - Data Masking

```typescript
import { buildDpiMaskingProvider } from "@jerome-benoit/sap-ai-provider";

const dpiConfig = buildDpiMaskingProvider({
  method: "anonymization",
  entities: ["profile-email", "profile-person", "profile-phone"],
});
```

Run it: `npx tsx examples/example-data-masking.ts`
```typescript
import "dotenv/config"; // Load environment variables
import { buildAzureContentSafetyFilter, createSAPAIProvider } from "@jerome-benoit/sap-ai-provider";

const provider = createSAPAIProvider({
  defaultSettings: {
    filtering: {
      input: {
        filters: [
          buildAzureContentSafetyFilter("input", {
            hate: "ALLOW_SAFE",
            violence: "ALLOW_SAFE_LOW_MEDIUM",
          }),
        ],
      },
    },
  },
});
```

Complete documentation: API Reference - Content Filtering
Ground LLM responses in your own documents using vector databases.
Complete example: `examples/example-document-grounding.ts`

Complete documentation: API Reference - Document Grounding

```typescript
import { buildDocumentGroundingConfig, createSAPAIProvider } from "@jerome-benoit/sap-ai-provider";

const provider = createSAPAIProvider({
  defaultSettings: {
    grounding: buildDocumentGroundingConfig({
      filters: [
        {
          id: "vector-store-1", // Your vector database ID
          data_repositories: ["*"], // Search all repositories
        },
      ],
      placeholders: {
        input: ["?question"],
        output: "groundingOutput",
      },
    }),
  },
});

// Queries are now grounded in your documents
const model = provider("gpt-4.1");
```

Run it: `npx tsx examples/example-document-grounding.ts`
Automatically translate user queries and model responses.
Complete example: `examples/example-translation.ts`

Complete documentation: API Reference - Translation

```typescript
import { buildTranslationConfig, createSAPAIProvider } from "@jerome-benoit/sap-ai-provider";

const provider = createSAPAIProvider({
  defaultSettings: {
    translation: {
      // Translate user input from German to English
      input: buildTranslationConfig("input", {
        sourceLanguage: "de",
        targetLanguage: "en",
      }),
      // Translate model output from English to German
      output: buildTranslationConfig("output", {
        targetLanguage: "de",
      }),
    },
  },
});

// Model handles German input/output automatically
const model = provider("gpt-4.1");
```

Run it: `npx tsx examples/example-translation.ts`
Override constructor settings on a per-call basis using `providerOptions`. Options are validated at runtime with Zod schemas.

```typescript
import { generateText } from "ai";
import { createSAPAIProvider, SAP_AI_PROVIDER_NAME } from "@jerome-benoit/sap-ai-provider";

const provider = createSAPAIProvider();

const result = await generateText({
  model: provider("gpt-4.1"),
  prompt: "Explain quantum computing",
  providerOptions: {
    [SAP_AI_PROVIDER_NAME]: {
      includeReasoning: true,
      modelParams: {
        temperature: 0.7,
        maxTokens: 1000,
      },
    },
  },
});
```

Complete documentation: API Reference - Provider Options
The provider and models can be configured with various settings for authentication, model parameters, data masking, content filtering, and more.
Common Configuration:

- `name`: Provider name (default: `'sap-ai'`). Used as the key in `providerOptions`/`providerMetadata`.
- `resourceGroup`: SAP AI Core resource group (default: `'default'`)
- `deploymentId`: Specific deployment ID (auto-resolved if not set)
- `modelParams`: Temperature, maxTokens, topP, and other generation parameters
- `masking`: SAP Data Privacy Integration (DPI) configuration
- `filtering`: Content safety filters (Azure Content Safety, Llama Guard)
For complete configuration reference including all available options, types, and examples, see API Reference - Configuration.
The provider uses standard Vercel AI SDK error types (`APICallError`, `LoadAPIKeyError`, `NoSuchModelError` from `@ai-sdk/provider`) for consistent error handling across providers.
Documentation:
- API Reference - Error Handling - Complete examples, error types, and SAP-specific metadata
- Troubleshooting Guide - Solutions for common errors (401, 404, 429, 5xx)
Quick Reference:

- Authentication (401): Check `AICORE_SERVICE_KEY` or `VCAP_SERVICES`
- Model not found (404): Confirm tenant/region supports the model ID
- Rate limit (429): Automatic retry with exponential backoff
- Streaming: Iterate `textStream` correctly; don't mix `generateText` and `streamText`
For detailed solutions, see Troubleshooting Guide covering authentication, model discovery, rate limiting, server errors, streaming, and tool calling.
Error codes: API Reference - HTTP Status Codes
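The quick-reference items above can be folded into a small helper that maps an `APICallError.statusCode` to a remediation hint. This is an illustrative sketch, not part of the provider API:

```typescript
// Hypothetical helper: map an APICallError.statusCode to a remediation hint
// matching the quick reference above. Not part of the provider package.
function remediationHint(statusCode: number | undefined): string {
  switch (statusCode) {
    case 401:
      return "Check AICORE_SERVICE_KEY or VCAP_SERVICES";
    case 404:
      return "Confirm tenant/region supports the model ID";
    case 429:
      return "Rate limited: retries are automatic; back off before retrying";
    default:
      return statusCode !== undefined && statusCode >= 500
        ? "Server error: retried automatically with exponential backoff"
        : "See the Troubleshooting Guide";
  }
}

console.log(remediationHint(401)); // "Check AICORE_SERVICE_KEY or VCAP_SERVICES"
```

In application code you would call this inside a `catch` block after confirming `error instanceof APICallError`.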
- Prefer streaming (`streamText`) for long outputs to reduce latency and memory.
- Tune `modelParams` carefully: lower `temperature` for deterministic results; set `maxTokens` to the expected response size.
- Use `defaultSettings` at provider creation for shared knobs across models to avoid per-call overhead.
- Avoid unnecessary history: keep `messages` concise to reduce prompt size and cost.
Follow security best practices when handling credentials. See Environment Setup - Security Best Practices for detailed guidance on credential management, key rotation, and secure deployment.
- Use the curl guide `CURL_API_TESTING_GUIDE.md` to diagnose raw API behavior independent of the SDK.
- Log request IDs from `error.responseBody` (parse the JSON for `request_id`) to correlate with backend traces.
- Temporarily enable verbose logging in your app around provider calls; redact secrets.
The examples/ directory contains complete, runnable examples demonstrating key
features:
| Example | Description | Key Features |
|---|---|---|
| `example-generate-text.ts` | Basic text generation | Simple prompts, synchronous generation |
| `example-simple-chat-completion.ts` | Simple chat conversation | System messages, user prompts |
| `example-chat-completion-tool.ts` | Tool calling with functions | Weather API tool, function execution |
| `example-streaming-chat.ts` | Streaming responses | Real-time text generation, SSE |
| `example-image-recognition.ts` | Multi-modal with images | Vision models, image analysis |
| `example-data-masking.ts` | Data privacy integration | DPI masking, anonymization |
| `example-document-grounding.ts` | Document grounding (RAG) | Vector store, retrieval-augmented generation |
| `example-translation.ts` | Input/output translation | Multi-language support, SAP translation |
| `example-embeddings.ts` | Text embeddings | Vector generation, semantic similarity |
| `example-foundation-models.ts` | Foundation Models API | Direct model access, logprobs, seed |
Running Examples:

```bash
npx tsx examples/example-generate-text.ts
```

Note: Examples require the `AICORE_SERVICE_KEY` environment variable. See Environment Setup for configuration.
Version 4.0 migrates from LanguageModelV2 to LanguageModelV3 specification (AI SDK 5.0+). See the Migration Guide for complete upgrade instructions.
Key changes:

- Finish Reason: Changed from string to object (`result.finishReason.unified`)
- Usage Structure: Nested format with detailed token breakdown (`result.usage.inputTokens.total`)
- Stream Events: Structured blocks (`text-start`, `text-delta`, `text-end`) instead of simple deltas
- Warning Types: Updated format with a `feature` field for categorization

Impact by user type:

- High-level API users (`generateText`/`streamText`): ✅ Minimal impact (likely no changes)
- Direct provider users: ⚠️ Update type imports (`LanguageModelV2` → `LanguageModelV3`)
- Custom stream parsers: ⚠️ Update parsing logic for the V3 structure
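To make the finish-reason and usage shape changes concrete, here is a sketch using local stand-in types; these are illustrative only, not the actual `LanguageModelV3` definitions, which carry more fields:

```typescript
// Illustrative local type capturing only the V3 result fields named above.
type V3ResultShape = {
  finishReason: { unified: string }; // was a plain string in V2
  usage: { inputTokens: { total: number } }; // was a flat count in V2
};

const result: V3ResultShape = {
  finishReason: { unified: "stop" },
  usage: { inputTokens: { total: 42 } },
};

// In V2 you read result.finishReason directly; in V3, read the nested fields:
console.log(result.finishReason.unified); // "stop"
console.log(result.usage.inputTokens.total); // 42
```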
Version 3.0 standardizes error handling to use Vercel AI SDK native error types. See the Migration Guide for complete upgrade instructions.
Key changes:

- `SAPAIError` removed → Use `APICallError` from `@ai-sdk/provider`
- Error properties: `error.code` → `error.statusCode`
- Automatic retries for rate limits (429) and server errors (5xx)
Version 2.0 uses the official SAP AI SDK. See the Migration Guide for complete upgrade instructions.
Key changes:

- Authentication via the `AICORE_SERVICE_KEY` environment variable
- Synchronous provider creation: `createSAPAIProvider()` (no await)
- Helper functions from the SAP AI SDK
For detailed migration instructions with code examples, see the complete Migration Guide.
Third-Party Provider: This SAP AI Provider (`@jerome-benoit/sap-ai-provider`) is developed and maintained by jerome-benoit, not by SAP SE. While it uses the official SAP AI SDK and integrates with SAP AI Core services, it is not an official SAP product.
We welcome contributions! Please see our Contributing Guide for details.
- Migration Guide - Version upgrade instructions (v1.x → v2.x → v3.x → v4.x)
- API Reference - Complete API documentation with all types and functions
- Environment Setup - Authentication and configuration setup
- Troubleshooting - Common issues and solutions
- Architecture - Internal architecture, design decisions, and request flows
- cURL API Testing Guide - Direct API testing for debugging
- 🐛 Issue Tracker - Report bugs, request features, and ask questions
- Vercel AI SDK - The AI SDK this provider extends
- SAP AI SDK - Official SAP Cloud SDK for AI
- SAP AI Core Documentation - Official SAP AI Core docs
Apache License 2.0 - see LICENSE for details.