Achieves <400ms latency for real-time feedback with Vertex AI
Scales to thousands of concurrent users on Google Kubernetes Engine
Cuts model deployment time from hours to minutes with Vertex AI and Google Kubernetes Engine
Accelerates scientific validation with BigQuery
Secures sensitive neuro-data with Cloud IAM
BrainLife unlocks peak cognitive performance for all with accessible, personalized neuro-wellness tech.

BrainLife works to make intangible mental states measurable, understandable, and improvable for everyone. Whether the aim is to prevent driver drowsiness or help students maintain attention, the goal is to replace subjective self-reports with objective data. The company's flagship product, the Focus+ headset, captures high-frequency multimodal data to detect cognitive states such as fatigue or stress in real time. When the system detects a loss of attention, it triggers personalized interventions, such as adaptive soundscapes, breathing exercises, or specific neurofeedback tasks, to guide the user back to a focused state.
Effective neurofeedback requires a response in under 400 milliseconds to accurately train the brain. On its legacy infrastructure, the system struggled to meet this threshold, and users often received auditory or task-based cues seconds after their attention had already shifted. This inconsistent feedback confused users and invalidated the training loop.
Storing and managing the high-volume time-series data locally was also expensive and inefficient. The team spent valuable hours patching servers and manually managing capacity during usage spikes rather than refining the AI models essential to the product's success. To scale from a pilot to a commercial platform, BrainLife needed an architecture capable of handling the speed and complexity of the human mind without the operational overhead.
Deploying our learning models used to mean constantly firefighting infrastructure issues. With Google Cloud, we can stop doing server maintenance and start spending our engineering effort on building models and improving the user experience.
Chi-Thanh Vi
CEO and Co-founder, BrainLife
BrainLife migrated its platform to Google Cloud to bridge the gap between data collection and user intervention. Supported by experts at SIB Swiss Institute of Bioinformatics, the team redesigned its streaming pipelines to handle the high throughput of live neurodata. With this new architecture, when a user wears the Focus+ headset, raw EEG and PPG data stream directly into a microservices environment orchestrated by Google Kubernetes Engine (GKE). GKE manages the ingestion stream, automatically scaling resources to handle thousands of concurrent sessions without the latency spikes that held back BrainLife's legacy system.
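In a GKE setup like this, automatic scaling of an ingestion service is typically expressed as a HorizontalPodAutoscaler. The manifest below is an illustrative sketch only: the service name, replica bounds, and CPU target are assumptions, not BrainLife's actual configuration.

```yaml
# Hypothetical autoscaling config for an ingestion microservice on GKE.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: eeg-ingest-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: eeg-ingest          # assumed name of the stream-ingestion service
  minReplicas: 3              # keep a baseline for always-on sessions
  maxReplicas: 50             # headroom for thousands of concurrent users
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # add pods before CPU saturation causes latency spikes
```

Scaling on a load signal before saturation, rather than after, is what keeps ingestion latency flat during usage spikes.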

Once the data is ingested, Vertex AI takes over, processing the signals with custom models that recognize the unique brain activity patterns of that specific user. This allows the system to accurately classify cognitive states, distinguishing a moment of distraction from deep thought, and trigger the appropriate soundscape or exercise in real time. The entire loop, from signal to sensation, now happens in under 400 milliseconds.
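Conceptually, this closed loop can be sketched as a sliding window over per-sample model outputs, firing an intervention when the windowed attention estimate drops below a threshold. Everything below is an illustrative assumption, not BrainLife's actual code: the class name, window size, threshold, and intervention label are all hypothetical stand-ins for the real Vertex AI model outputs.

```python
from collections import deque
from statistics import mean

WINDOW = 5        # hypothetical: number of recent scores considered
THRESHOLD = 0.6   # hypothetical: below this mean score, attention is "lost"

class FocusLoop:
    """Sketch of a real-time feedback loop over streamed attention scores."""

    def __init__(self):
        # Fixed-size window: old scores fall off automatically.
        self.scores = deque(maxlen=WINDOW)

    def ingest(self, score):
        """Add one attention score (a stand-in for a model output);
        return an intervention name if the windowed mean drops too low."""
        self.scores.append(score)
        if len(self.scores) == WINDOW and mean(self.scores) < THRESHOLD:
            return "adaptive_soundscape"  # e.g. cue chosen per user profile
        return None

loop = FocusLoop()
stream = [0.9, 0.8, 0.7, 0.5, 0.4, 0.3, 0.2]  # simulated declining attention
events = [loop.ingest(s) for s in stream]
# Interventions fire only once the windowed average crosses the threshold.
```

The windowed average smooths out single noisy samples, so a momentary dip does not trigger a cue; only a sustained drop does.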
To keep these personalized models current, the team hosts software build pipelines on Compute Engine. This workflow allows the team to push updates and new model versions to production rapidly, ensuring users always benefit from the latest improvements.
Additionally, the new infrastructure has accelerated the scientific cycle from hypothesis to validation. All multimodal data is securely archived in Cloud Storage, creating a rich repository of neuro-data. Researchers then use BigQuery to analyze the extensive datasets, uncovering population-level insights and refining algorithms at speeds that were previously impossible.
With Vertex AI and GKE, we cut our model deployment time from hours to minutes. We no longer spend time configuring servers; we spend it refining the algorithms that help our users focus.
Chi-Thanh Vi
CEO and Co-founder, BrainLife

By using Cloud Logging to monitor system health across the entire stack, the engineering team has moved from reactive infrastructure patching to proactive innovation, freeing engineers to focus on R&D while the platform scales to thousands of concurrent users without compromising the user experience.

Google Cloud plays a strategic role in our roadmap. It provides the security and scalability we need to transition from pilot-stage deployments to revenue-generating products, all while maintaining the scientific rigor our mission demands.
Chi-Thanh Vi
CEO and Co-founder, BrainLife
On the data privacy front, using Cloud IAM to enforce strict role-based access controls ensures that sensitive neuro-physiological data remains secure and compliant with governance standards. This security posture has become a key differentiator, helping BrainLife build trust with institutional partners and investors who demand rigorous data protection.
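Role-based access control of this kind is typically declared as IAM policy bindings. The fragment below is a hedged illustration of the pattern, with assumed group names and roles rather than BrainLife's real policy.

```json
{
  "bindings": [
    {
      "role": "roles/bigquery.dataViewer",
      "members": ["group:research-team@example.com"]
    },
    {
      "role": "roles/storage.objectViewer",
      "members": ["group:ml-engineers@example.com"]
    }
  ]
}
```

Granting each group only the narrow role its work requires means raw neuro-physiological data is never readable by accounts outside those bindings.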
Looking ahead, BrainLife plans to deepen its use of Google Cloud as it pursues regulatory approval for medical applications. The roadmap includes leveraging advanced generative capabilities within Vertex AI to create even more adaptive, personalized interventions, further blurring the line between consumer wellness and clinical therapy.
BrainLife is a neurotechnology company developing multimodal cognitive monitoring and intervention systems. Its core product, Focus+, combines wearable sensing with cloud-based AI analytics to help individuals and organizations understand, train, and improve mental states.
Industry: Healthcare and Life Sciences
Location: Vietnam
Products: Vertex AI, Google Kubernetes Engine, Cloud Storage, Compute Engine, Cloud Logging, Cloud IAM, BigQuery