Our preliminary research suggests that frontier AI espionage systematically undermines the international coordination necessary to address AI's global catastrophic risks. This research maps both the severity and probability of espionage targeting frontier AI, and its consequences for AI safety, through a mixed-methods approach combining case study analysis, a systematic review of the academic literature, and unstructured expert interviews. Findings are strengthened through triangulation of sources from think tanks, trade documents, cyber incident databases, and court documents.
Cold War case studies demonstrate the severity: espionage erodes trust, accelerates arms races, and deprioritizes safety. Our analysis of actors, motivations, vectors, and targets qualitatively assesses the probability by examining who conducts frontier AI espionage, why they do so, how they execute it, and what they steal.
The probability assessment reveals troubling findings. State actors are highly motivated by strategic imperatives to achieve AI dominance. Individual actors demonstrate that financial incentives alone can be sufficient. Private intelligence firms offer espionage-as-a-service. Meanwhile, verified incidents show that basic techniques such as company credentials and USB drives, rather than sophisticated hacking, successfully extract proprietary information. The combination of strong motivations and low technical barriers suggests a high probability of espionage targeting frontier AI.
The report identifies specific countermeasures to reduce this probability across the AI stack, from protecting model weights to securing hardware specifications. Yet half of these countermeasures depend on voluntary firm-level practices rather than government policy, leaving critical vulnerabilities where frontier AI is actually developed.