
Hyperliquid Data Streams

Updated on Feb 25, 2026

Overview

Hyperliquid provides several data streams that contain different types of blockchain and exchange events. These streams are available through both gRPC Streaming API and JSON-RPC/WebSocket APIs, allowing you to access real-time and historical data.

Available Data Streams

| Stream Type | Description | API Availability | Primary Use Case |
|---|---|---|---|
| TRADES | All executed trades with price, size, and direction | gRPC + JSON-RPC/WSS | Trading analytics, price tracking |
| ORDERS | Order lifecycle events (18+ status types including fills, cancellations) | gRPC + JSON-RPC/WSS | Order management, execution monitoring |
| BOOK_UPDATES | Order book changes with bid/ask prices and quantities | gRPC + JSON-RPC/WSS | Market depth analysis, liquidity monitoring |
| TWAP | TWAP execution data and algorithm progress | gRPC + JSON-RPC/WSS | Algorithmic trading, execution analytics |
| EVENTS | Balance changes, transfers, deposits, withdrawals, vault operations | gRPC + JSON-RPC/WSS | Fund flow analysis, account monitoring |
| WRITER_ACTIONS | HyperCore ↔ HyperEVM asset transfers and bridge data | gRPC + JSON-RPC/WSS | Cross-environment asset tracking, DeFi analytics |
| BLOCKS | Raw HyperCore blockchain data exposing replica_cmds (all transaction types) | gRPC only | Blockchain analysis, data archiving |
| StreamL2Book | Aggregated price levels: total size and order count per price, full snapshot every block | gRPC only | Market depth, spread monitoring, analytics dashboards |
| StreamL4Book | Individual order granularity: every resting order with user, oid, size, triggers, timestamps | gRPC only | HFT, quant trading, MEV, microstructure analysis |

Stream Filtering

All Hyperliquid data streams support filtering, so you receive only the data you need. Unfiltered streams can generate massive amounts of data; filters let you focus on specific trading pairs, users, event types, and more.

Key Benefits:

  • Reduced bandwidth - Receive only relevant data instead of processing everything
  • Focused applications - Build targeted apps without overwhelming data streams
  • Real-time efficiency - Process specific events faster for alerts and decision making

Example filters (WebSocket):

{
  "streamType": "trades",
  "filters": {
    "coin": ["BTC", "ETH"],  // Only BTC and ETH trades
    "side": ["B"],           // Only buy-side trades
    "user": ["0x123..."]     // Only a specific user's trades
  }
}

Complete Filtering Guide - Detailed documentation with syntax, examples, and field references for all stream types.

Stream Structure

All data streams follow a consistent structure:

{
  "local_time": "2025-12-04T17:52:45.734593237",
  "block_time": "2025-12-04T17:52:45.554315846",
  "block_number": 817863403,
  "events": [
    // Array of events specific to the stream type
  ]
}

Common Fields

| Field | Type | Description |
|---|---|---|
| local_time | string | Local server timestamp in ISO 8601 format |
| block_time | string | Blockchain block timestamp in ISO 8601 format |
| block_number | integer | Sequential block number on the Hyperliquid chain |
| events | array | Array of events specific to the stream type |
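Because local_time and block_time share the same ISO 8601 format, the envelope's end-to-end latency can be computed directly. A minimal sketch in JavaScript (the sample message reuses the envelope fields above; truncating to milliseconds and assuming UTC are our choices, since the timestamps carry nanosecond precision with no zone suffix):

```javascript
// Parse a stream timestamp. Messages carry nanosecond precision with no
// zone suffix; truncate to milliseconds and assume UTC so Date parses it.
function parseTs(ts) {
  return new Date(ts.replace(/(\.\d{3})\d*$/, "$1") + "Z");
}

// How far behind the block a received message is, in milliseconds,
// using the common envelope fields shared by all streams.
function streamLatencyMs(message) {
  return parseTs(message.local_time).getTime() - parseTs(message.block_time).getTime();
}

// Example envelope (trimmed to the common fields)
const msg = {
  local_time: "2025-12-04T17:52:45.734593237",
  block_time: "2025-12-04T17:52:45.554315846",
  block_number: 817863403,
  events: [],
};

console.log(streamLatencyMs(msg)); // 180
```

A latency like this can feed a staleness check, e.g. dropping or flagging messages that arrive more than a few hundred milliseconds after the block.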

Data Stream Details

Each data stream has its own unique event structure and use cases:


  • Stream Filtering - Complete guide to filtering all stream types with syntax and examples
  • Trades - Executed trade data with maker/taker information
  • Orders - Complete order lifecycle tracking with 18+ status types
  • Book Updates - Real-time order book changes for market depth analysis
  • TWAP - Time-weighted average price execution tracking
  • Events - Balance changes, transfers, deposits, withdrawals, and vault operations
  • Writer Actions - HyperCore ↔ HyperEVM asset transfers and bridge data
  • Blocks - Raw blockchain data (gRPC only)
  • StreamL2Book - Aggregated price-level depth, full snapshot every block (gRPC only)
  • StreamL4Book - Individual order granularity with snapshot + diffs (gRPC only)

Access Methods

gRPC Streaming API

All streams are available via the gRPC Streaming API for high-performance, real-time data access:

// Subscribe to the trades stream
subscribe: {
  stream_type: 'TRADES',
  filters: {
    "coin": {"values": ["BTC", "ETH"]},
    "user": {"values": ["0x123..."]}
  }
}

JSON-RPC API

Historical data access via JSON-RPC methods:

# Get latest blocks
curl -X POST https://your-endpoint.hype-mainnet.quiknode.pro/your-token/hypercore \
-d '{"method": "hl_getLatestBlocks", "params": {"stream": "trades", "count": 10}}'

# Get specific block
curl -X POST https://your-endpoint.hype-mainnet.quiknode.pro/your-token/hypercore \
-d '{"method": "hl_getBlock", "params": ["trades", 817863403]}'

WebSocket API

Real-time subscriptions via WebSocket:

// Subscribe to trades
ws.send(JSON.stringify({
  "method": "hl_subscribe",
  "params": {"streamType": "trades"}
}));

// Subscribe to orders
ws.send(JSON.stringify({
  "method": "hl_subscribe",
  "params": {"streamType": "orders"}
}));
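These subscriptions can also carry filters. A small sketch combining hl_subscribe with the filter syntax from the Stream Filtering section; note that nesting filters inside params is our assumption here, so confirm the exact field placement against the Filtering Guide:

```javascript
// Build an hl_subscribe payload that optionally carries filters.
// NOTE: placing "filters" inside "params" is an assumption; check the
// Filtering Guide for the exact shape your endpoint expects.
function buildSubscribe(streamType, filters = {}) {
  const payload = { method: "hl_subscribe", params: { streamType } };
  if (Object.keys(filters).length > 0) {
    payload.params.filters = filters;
  }
  return JSON.stringify(payload);
}

// A filtered trades subscription, ready for ws.send(...)
const sub = buildSubscribe("trades", { coin: ["BTC", "ETH"], side: ["B"] });
console.log(sub);
// {"method":"hl_subscribe","params":{"streamType":"trades","filters":{"coin":["BTC","ETH"],"side":["B"]}}}
```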

Frequently Asked Questions

Which plan should I use for streaming all datasets with gRPC or WSS?
For users planning to stream all HyperCore datasets via gRPC or WSS with no filtering, we recommend the Scale plan or higher to account for the expected throughput.

Data volume varies based on onchain activity, and streaming all datasets without filters can generate significant data transfer. The Scale plan provides sufficient API credits to handle this usage pattern.

How much data does each stream produce per hour?

Based on live testing, these are the estimated decompressed data rates per hour for each stream:

High-volume streams:
  • BLOCKS: ~9.14 GB/hour (152 MB/min)
  • ORDERS: ~4.11 GB/hour (68 MB/min)
  • BOOK_UPDATES: ~3.15 GB/hour (53 MB/min)

Medium-volume streams:
  • TRADES: ~130 MB/hour (2.16 MB/min)

Low-volume streams:
  • EVENTS: ~6.8 MB/hour (0.11 MB/min)
  • WRITER_ACTIONS: ~5.8 MB/hour (0.10 MB/min)
  • TWAP: ~5.3 MB/hour (0.09 MB/min)

Important: These estimates come from a 10-minute test window extrapolated to hourly rates. Actual volume varies significantly with onchain activity; during periods of heavy trading or increased transaction throughput these rates can rise substantially. Plan your infrastructure and API credit usage accordingly.
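These hourly figures extrapolate straight into capacity planning. A quick sketch projecting the unfiltered BLOCKS rate above to daily and monthly volume (planning estimates only; real traffic fluctuates):

```javascript
// Extrapolate an hourly data rate (GB/hour) to daily and monthly volume.
// These are planning estimates only; actual traffic varies with activity.
function projectVolume(gbPerHour) {
  const dailyGB = gbPerHour * 24;
  return { dailyGB, monthlyTB: (dailyGB * 30) / 1024 };
}

// Unfiltered BLOCKS stream at ~9.14 GB/hour:
const blocks = projectVolume(9.14);
console.log(blocks.dailyGB.toFixed(1));   // "219.4" GB/day
console.log(blocks.monthlyTB.toFixed(2)); // "6.43" TB/month
```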

Does Hyperliquid support data compression?
Yes! We support and encourage using zstd compression for gRPC streaming.

Key points:
• Compression significantly improves data transfer speeds
• Compression does not affect billing - you're billed on uncompressed data volume
• Example implementations available at hypercore-grpc-examples

Using compression is highly recommended for optimal performance, especially when streaming multiple datasets.

How many addresses can I filter?

gRPC/WebSocket concurrent streams and filters by plan:
  • Build: 1 stream with up to 5 filters
  • Accelerate: 5 streams with up to 10 filters each
  • Scale: 10 streams with up to 25 filters each
  • Business: 25 streams with up to 50 filters each

For larger address lists:
Use our KV Store + Streams product for handling extensive address filtering.

See the complete Filtering Guide for detailed filter syntax and examples.

Does gRPC support historical data queries?

No, gRPC is a streaming-only service designed for real-time data. It does not support querying historical blocks.

For historical/backfill data, use the JSON-RPC methods:
  • hl_getLatestBlocks - Fetch up to 200 recent blocks
  • hl_getBatchBlocks - Fetch up to 200 specific blocks by number

If your gRPC connection breaks:
You can use these JSON-RPC methods to query historical blocks in batches for backfilling, then resume your gRPC stream from the current block.
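The reconnect-and-backfill pattern reduces to simple batching logic: split the gap between the last block you processed and the current block into chunks of at most 200 block numbers, one chunk per hl_getBatchBlocks call. A sketch of that batching (how the block list is passed in params is not shown here; see the method reference):

```javascript
// Split a missed block range into batches of at most 200 block numbers.
// Each batch would back one hl_getBatchBlocks call, which accepts up to
// 200 specific blocks by number.
function backfillBatches(lastSeenBlock, currentBlock, batchSize = 200) {
  const batches = [];
  let batch = [];
  for (let b = lastSeenBlock + 1; b < currentBlock; b++) {
    batch.push(b);
    if (batch.length === batchSize) {
      batches.push(batch);
      batch = [];
    }
  }
  if (batch.length > 0) batches.push(batch);
  return batches;
}

// Missed blocks 817863404..817863853 (450 blocks) -> 3 calls: 200 + 200 + 50
const batches = backfillBatches(817863403, 817863854);
console.log(batches.length);    // 3
console.log(batches[2].length); // 50
```

Once every batch has been fetched and processed, resume the gRPC stream from the current block.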

Where can I find gRPC streaming examples with compression?
We provide complete working examples for Hyperliquid gRPC streaming with zstd compression support in our official GitHub repository:

github.com/quiknode-labs/hypercore-grpc-examples

What's included:
• Ready-to-run code examples in multiple languages
• zstd compression implementation
• Stream filtering examples
• Complete setup instructions

Simply clone the repository, install dependencies, plug in your QuickNode endpoint, and run. These examples demonstrate best practices for optimal performance when streaming Hyperliquid data.