This library provides composable filter and transformation utilities for UI message streams created by streamText() in the AI SDK.
The AI SDK UI message stream created by toUIMessageStream() streams all parts (text, tools, reasoning, etc.) to the client by default. However, you may want to:
- Filter: Tool calls like database searches often contain large amounts of data or sensitive information that should not be streamed to the client
- Transform: Modify text or tool outputs while they are streamed to the client
- Observe: Log stream lifecycle events, update states, or run side-effects without modifying the stream
This library provides type-safe, composable utilities for all these use cases.
This library supports AI SDK v5 and v6.
npm install ai-stream-utils
The pipe function provides a composable pipeline API for filtering, transforming, and observing UI message streams. Multiple operators can be chained together, and type guards automatically narrow chunk and part types, enabling type-safe stream transformations with autocomplete.
Filter chunks by returning true to keep or false to exclude.
const stream = pipe(result.toUIMessageStream())
.filter(({ chunk, part }) => {
// chunk.type: "text-delta" | "text-start" | "tool-input-available" | ...
// part.type: "text" | "reasoning" | "tool-weather" | ...
if (chunk.type === "data-weather") {
return false; // exclude chunk
}
return true; // keep chunk
})
.toStream();
Type guards provide a simpler API for common filtering patterns:
includeChunks("text-delta")orincludeChunks(["text-delta", "text-end"]): Include specific chunk typesexcludeChunks("text-delta")orexcludeChunks(["text-delta", "text-end"]): Exclude specific chunk typesincludeParts("text")orincludeParts(["text", "reasoning"]): Include specific part typesexcludeParts("reasoning")orexcludeParts(["reasoning", "tool-database"]): Exclude specific part types
Example: Exclude tool calls from the client.
const stream = pipe(result.toUIMessageStream())
.filter(excludeParts(["tool-weather", "tool-database"]))
.toStream();
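Chunk-level guards work the same way, though excluding an entire part by chunk type means listing every chunk type it produces. A sketch that drops all reasoning chunks (which should match what excludeParts("reasoning") does):
const stream = pipe(result.toUIMessageStream())
  .filter(excludeChunks(["reasoning-start", "reasoning-delta", "reasoning-end"]))
  .toStream();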
Transform chunks by returning a chunk, an array of chunks, or null to exclude.
const stream = pipe(result.toUIMessageStream())
.map(({ chunk, part }) => {
// chunk.type: "text-delta" | "text-start" | "tool-input-available" | ...
// part.type: "text" | "reasoning" | "tool-weather" | ...
if (chunk.type === "text-start") {
return chunk; // pass through unchanged
}
if (chunk.type === "text-delta") {
return { ...chunk, delta: "modified" }; // transform chunk
}
if (chunk.type === "data-weather") {
return [chunk, { ...chunk }]; // emit multiple chunks
}
return null; // exclude chunk (same as filter)
})
.toStream();
Example: Convert text to uppercase.
const stream = pipe(result.toUIMessageStream())
.map(({ chunk }) => {
if (chunk.type === "text-delta") {
return { ...chunk, delta: chunk.delta.toUpperCase() };
}
return chunk;
})
.toStream();
Observe chunks without modifying the stream. The callback is invoked for matching chunks.
const stream = pipe(result.toUIMessageStream())
.on(
({ chunk, part }) => {
// return true to invoke callback, false to skip
return chunk.type === "text-delta";
},
({ chunk, part }) => {
// callback invoked for matching chunks
console.log(chunk, part);
},
)
.toStream();
Type guards provide a type-safe way to observe specific chunk types:
chunkType("text-delta")orchunkType(["start", "finish"]): Observe specific chunk typespartType("text")orpartType(["text", "reasoning"]): Observe chunks belonging to specific part types
Note
The partType type guard still operates on chunks. That means partType("text") will match any text chunks such as text-start, text-delta, and text-end.
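For example, partType("text") used with on() fires for each of those chunk types:
const stream = pipe(result.toUIMessageStream())
  .on(partType("text"), ({ chunk }) => {
    // Invoked for "text-start", "text-delta", and "text-end" chunks
    console.log(chunk.type);
  })
  .toStream();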
Example: Log stream lifecycle events.
const stream = pipe(result.toUIMessageStream())
.on(chunkType("start"), ({ chunk }) => {
console.log("Stream started:", chunk.messageId);
})
.on(chunkType("finish"), ({ chunk }) => {
console.log("Stream finished:", chunk.finishReason);
})
.on(chunkType("tool-input-available"), ({ chunk }) => {
console.log("Tool input:", chunk.toolName, chunk.input);
})
.on(chunkType("tool-output-available"), ({ chunk }) => {
console.log("Tool output:", chunk.toolName, chunk.output);
})
.toStream();
Convert the pipeline back to an AsyncIterableStream<InferUIMessageChunk<UI_MESSAGE>> that can be returned to the client or consumed.
const stream = pipe(result.toUIMessageStream())
.filter(({ chunk }) => /* ... */ true)
.map(({ chunk }) => /* ... */ chunk)
.toStream();
// Iterate with for-await-of
for await (const chunk of stream) {
console.log(chunk);
}
// Consume with readUIMessageStream()
for await (const message of readUIMessageStream({ stream })) {
console.log(message);
}
// Return to the client (consumed by useChat())
return stream;
Multiple operators can be chained together. After filtering with type guards, chunk and part types are narrowed automatically.
const stream = pipe<MyUIMessage>(result.toUIMessageStream())
.filter(includeParts("text"))
.map(({ chunk, part }) => {
// chunk is narrowed to text chunks: "text-start" | "text-delta" | "text-end"
// part is narrowed to "text"
return chunk;
})
.toStream();
Control chunks always pass through regardless of filter/transform settings, as the example after this list shows:
- start: Stream start marker
- finish: Stream finish marker
- abort: Stream abort marker
- message-metadata: Message metadata updates
- error: Error messages
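For example, a pipeline that keeps only text parts still emits every control chunk:
const stream = pipe(result.toUIMessageStream())
  // "start", "finish", "abort", "message-metadata", and "error" chunks still pass through
  .filter(includeParts("text"))
  .toStream();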
Helper functions for consuming streams and converting between streams, arrays, and async iterables.
Consumes a UI message stream by fully reading it and returns the final assembled message. Useful for server-side processing without streaming to the client.
import { consumeUIMessageStream } from "ai-stream-utils";
const result = streamText({
model: openai("gpt-4o"),
prompt: "Tell me a joke",
});
const message = await consumeUIMessageStream(result.toUIMessageStream<MyUIMessage>());
console.log(message.parts); // All parts fully assembled
Adds async iterator protocol to a ReadableStream, enabling for await...of loops.
import { createAsyncIterableStream } from "ai-stream-utils";
const asyncStream = createAsyncIterableStream(readableStream);
for await (const chunk of asyncStream) {
console.log(chunk);
}
Converts an array to a ReadableStream that emits each element.
import { convertArrayToStream } from "ai-stream-utils";
const stream = convertArrayToStream([1, 2, 3]);
Converts an async iterable (e.g., an async generator) to a ReadableStream.
import { convertAsyncIterableToStream } from "ai-stream-utils";
async function* generator() {
yield 1;
yield 2;
}
const stream = convertAsyncIterableToStream(generator());
Collects all values from an async iterable into an array.
import { convertAsyncIterableToArray } from "ai-stream-utils";
const array = await convertAsyncIterableToArray(asyncIterable);
Consumes a ReadableStream and collects all chunks into an array.
import { convertStreamToArray } from "ai-stream-utils";
const array = await convertStreamToArray(readableStream);
Converts a UI message stream to an SSE (Server-Sent Events) stream. Useful for sending UI message chunks over HTTP as SSE-formatted text.
import { convertUIMessageToSSEStream } from "ai-stream-utils";
const uiStream = result.toUIMessageStream();
const sseStream = convertUIMessageToSSEStream(uiStream);
// Output format: "data: {...}\n\n" for each chunk
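For instance, the SSE stream can be returned from a route handler; a sketch (exact headers may vary with your setup):
return new Response(sseStream.pipeThrough(new TextEncoderStream()), {
  headers: { "Content-Type": "text/event-stream" },
});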
Converts an SSE stream back to a UI message stream. Useful for parsing SSE-formatted responses on the client.
import { convertSSEToUIMessageStream } from "ai-stream-utils";
const response = await fetch("/api/chat");
const sseStream = response.body.pipeThrough(new TextDecoderStream());
const uiStream = convertSSEToUIMessageStream(sseStream);
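The resulting stream can then be consumed like any other UI message stream, e.g. with readUIMessageStream() as shown earlier:
for await (const message of readUIMessageStream({ stream: uiStream })) {
  console.log(message.parts);
}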
Warning
These functions are deprecated and will be removed in a future version. Use pipe() instead.
import { mapUIMessageStream } from "ai-stream-utils";
const stream = mapUIMessageStream(result.toUIMessageStream(), ({ chunk }) => {
if (chunk.type === "text-delta") {
return { ...chunk, delta: chunk.delta.toUpperCase() };
}
return chunk;
});
import { filterUIMessageStream, includeParts } from "ai-stream-utils";
const stream = filterUIMessageStream(
result.toUIMessageStream(),
includeParts(["text", "tool-weather"]),
);
import { flatMapUIMessageStream, partTypeIs } from "ai-stream-utils";
const stream = flatMapUIMessageStream(
result.toUIMessageStream(),
partTypeIs("tool-weather"),
({ part }) => {
if (part.state === "output-available") {
return {
...part,
output: { ...part.output, temperature: toFahrenheit(part.output.temperature) },
};
}
return part;
},
);
The toUIMessageStream() from streamText() returns a generic ReadableStream<UIMessageChunk>, which means the part types cannot be inferred automatically.
To enable autocomplete and type-safety, pass your UIMessage type as a generic parameter:
import type { UIMessage, InferUITools } from "ai";
type MyUIMessageMetadata = {};
type MyDataPart = {};
type MyTools = InferUITools<typeof tools>; // "tools" as passed to streamText()
type MyUIMessage = UIMessage<MyUIMessageMetadata, MyDataPart, MyTools>;
// Use MyUIMessage type when creating the UI message stream
const uiStream = result.toUIMessageStream<MyUIMessage>();
// Type-safe filtering with autocomplete
const stream = pipe<MyUIMessage>(uiStream)
.filter(includeParts(["text", "tool-weather"])) // Autocomplete works!
.map(({ chunk, part }) => {
// part.type is typed based on MyUIMessage
return chunk;
})
.toStream();
The transformed stream has the same type as the original UI message stream. You can consume it with useChat() or readUIMessageStream().
Since message parts may be different on the client vs. the server, you may need to reconcile message parts when the client sends messages back to the server.
If you save messages to a database and configure useChat() to only send the last message, you can read existing messages from the database. This means the model will have access to all message parts, including filtered parts not available on the client.
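A minimal server-side sketch of this pattern, assuming a hypothetical loadMessages() persistence helper and a client configured to send only its last message:
import { openai } from "@ai-sdk/openai";
import { convertToModelMessages, streamText } from "ai";
import { excludeParts, pipe } from "ai-stream-utils";

export async function POST(req: Request) {
  // The client sends only the last message; the full history lives in the database.
  const { id, message } = await req.json();
  const previousMessages = await loadMessages(id); // hypothetical persistence helper
  const messages = [...previousMessages, message];

  const result = streamText({
    model: openai("gpt-4o"),
    messages: convertToModelMessages(messages),
  });

  // Hide the database tool from the client; the model still saw the full history.
  const stream = pipe(result.toUIMessageStream())
    .filter(excludeParts("tool-database"))
    .toStream();
  return stream; // return to the client as in the toStream() examples above
}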