from the winkJS open-source ecosystem
Turn streaming data into actionable insights — from edge to cloud.
A high-performance JavaScript framework for IIoT — signal conditioning, anomaly detection, and health assessment underpinned by neural-network intelligence.
See it live — in your browser
The same Composer core that powers edge-to-cloud deployments, running right here on published research datasets.
Scrub through time and watch insights surface as each message flows through the pipeline.
Industrial Manufacturing
A bearing degraded over 7 days in a NASA test rig. A winkComposer flow tracked vibration energy, spotted a distribution shift, and flagged the problem — days before the bearing failed.
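The detection idea behind this story can be sketched in plain JavaScript: compute windowed RMS vibration energy, then flag a distribution shift when the recent window's mean drifts several baseline standard deviations away. Function names and the threshold are illustrative, not winkComposer API.

```javascript
// Sketch of distribution-shift detection on vibration energy.
// Names and thresholds are illustrative, not winkComposer API.

// Windowed RMS energy of a raw vibration signal.
function rmsEnergy(samples) {
  const sumSq = samples.reduce((s, x) => s + x * x, 0);
  return Math.sqrt(sumSq / samples.length);
}

// Flag a shift when the recent mean moves more than k baseline
// standard deviations away from the baseline mean.
function shiftDetected(baseline, recent, k = 3) {
  const mean = xs => xs.reduce((s, x) => s + x, 0) / xs.length;
  const mu = mean(baseline);
  const sd = Math.sqrt(mean(baseline.map(x => (x - mu) ** 2)));
  return Math.abs(mean(recent) - mu) > k * sd;
}
```

A healthy bearing's energies hover near the baseline; a degrading one drifts upward, so the flag trips well before outright failure.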
Dive In →

The building blocks
Diverse industries. Distinct challenges. One family of focused, composable building blocks — connected through one declarative flow language.
Express what you want, not how to build it.
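What a declarative flow might look like: this is a hypothetical sketch only, and the real winkComposer flow language may differ. The point is that you declare building blocks and their wiring; the runtime decides how to execute them.

```javascript
// Hypothetical flow declaration, illustrative only; the actual
// winkComposer flow language may look different.
const flow = {
  source: { type: 'mqtt', topic: 'plant/+/vibration' },
  nodes: [
    { id: 'smooth', op: 'ewma',   alpha: 0.2 },   // signal conditioning
    { id: 'detect', op: 'shift',  k: 3 },         // anomaly detection
    { id: 'assess', op: 'health', window: '24h' } // health assessment
  ],
  wiring: ['source -> smooth -> detect -> assess'],
  sink: { type: 'questdb', table: 'insights' }
};
```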
The full picture
Data in. Intelligence out. A complete streaming pipeline that transforms raw signals into contextual, ready-to-use insights.
Connect
MQTT brokers, OPC-UA servers, or CSV replay — your existing infrastructure.
Process
Your pipeline of composable building blocks — smoothing, detection, assessment.
Store
QuestDB time-series historian — every insight stored with full semantics.
Act
Grafana dashboards, MQTT alerts, AI queries via MCP — insight becomes action.
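The four stages above can be sketched as one plain-JavaScript pipeline. Every name and threshold here is invented for illustration, not the framework's API: a message arrives (Connect), is smoothed and checked (Process), every insight is persisted (Store), and anomalies trigger an alert (Act).

```javascript
// Illustrative four-stage pipeline: Connect -> Process -> Store -> Act.
// None of these names are winkComposer API; this is a plain-JS sketch.
function makePipeline({ limit, store, act }) {
  let smoothed = null;
  const alpha = 0.2; // EWMA smoothing factor

  // Called once per incoming message (the Connect stage feeds this).
  return function onMessage(msg) {
    // Process: exponential smoothing, then threshold detection.
    smoothed = smoothed === null
      ? msg.value
      : alpha * msg.value + (1 - alpha) * smoothed;
    const anomaly = smoothed > limit;

    const insight = { asset: msg.asset, smoothed, anomaly };
    store(insight);            // Store: e.g. write to a historian
    if (anomaly) act(insight); // Act: e.g. publish an MQTT alert
    return insight;
  };
}
```

A real deployment would plug MQTT or OPC-UA into the input and QuestDB or Grafana into the output; the shape of the data flow stays the same.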
Open-source dashboards
Composer computes. QuestDB stores. MQTT alerts in real time. Grafana shows health scores, evidence, wash cycles, and anomalies. The same intelligence your pipeline produces, in dashboards your team already knows.

winkComposer · QuestDB · Mosquitto · Grafana — open source, end to end.
Industrial intelligence, within everyone's reach.
Dashboards answer what's happening. When you need the why — you ask.
AI-native
Check sensor health and wash cycle state for Pump #42. Give me a visual summary I can share with the supervisor.

Actual output from Claude Opus querying a live winkComposer pipeline via MCP
By the time anyone asks, the answer already exists.
The pipeline never stops
Anomalies surfaced. Trends quantified. Health assessed — continuously, for every asset, edge to cloud. Insights exist before anyone looks.
Numbers carry meaning
Type, unit, limits, asset context — the semantics layer makes every computed value self-describing. Define once, inherit everywhere.
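One way to picture "define once, inherit everywhere" (all names here are hypothetical, not the framework's semantics API): attach type, unit, and limits to a single definition, and stamp every computed value with it so the number is self-describing downstream.

```javascript
// Hypothetical semantics layer; names are illustrative only.
const defs = {
  bearingTemp: {
    type: 'temperature',
    unit: '°C',
    limits: { low: 0, high: 90 }
  }
};

// Every computed value inherits its definition's semantics.
function describe(name, value, asset) {
  const d = defs[name];
  return {
    name, value, asset, ...d,
    inLimits: value >= d.limits.low && value <= d.limits.high
  };
}
```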
AI reasons over facts
Any LLM retrieves pre-computed results via Model Context Protocol — then reasons over them. Health reports, root cause analysis, trend explanations — grounded in real data.
Benchmarked
Same 8-node pipeline, benchmarked end-to-end — from a Raspberry Pi to a production server. Every asset gets its own isolated state. Failures never cross boundaries.
~100K msg/sec on a Raspberry Pi 5
1M+ msg/sec on a modern server
~300K msg/sec while tracking 200K assets
Same engine powers every recipe and use case on this site — in your browser.
Production-grade
A bad sensor doesn't take down the system. Each asset runs in its own isolated state.
Network goes down? No data lost. Messages queue locally and drain on reconnect.
Write once, deploy anywhere. Same pipeline code on a Raspberry Pi, a gateway, or a cloud server.
One flow, many equipment types. Route and specialise without duplicating pipelines.
Edge to cloud in layers. Local flows digest at the source. Aggregate flows combine across assets.
Clean shutdown, clean restart. Ordered teardown — sources, emitters, storage. No data corruption.
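The "no data lost" behaviour in the list above is classic store-and-forward: buffer while offline, drain in order on reconnect. A minimal sketch, with all names invented for illustration:

```javascript
// Minimal store-and-forward sketch; names are illustrative only,
// not winkComposer API.
class ForwardQueue {
  constructor(send) {
    this.send = send;   // delivery function used when online
    this.buffer = [];
    this.online = false;
  }
  publish(msg) {
    if (this.online) this.send(msg);
    else this.buffer.push(msg); // network down: queue locally
  }
  reconnect() {
    this.online = true;
    // Drain queued messages in arrival order.
    while (this.buffer.length) this.send(this.buffer.shift());
  }
  disconnect() {
    this.online = false;
  }
}
```

A production implementation would also persist the buffer to disk so a power cut survives a restart, but the reconnect-and-drain contract is the same.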
winkComposer is in active development and moving to open source. Follow the journey — new capabilities, releases, and milestones as they land.
Or subscribe for email updates.
Content is evolving as development progresses. Some details may change; some sections are still taking shape.