
ASAPP AI Transparency Portal
ASAPP is committed to responsible usage of artificial intelligence. We’re proud to be transparent about our use of both proprietary ASAPP models and external models from other providers.
Products
ASAPP is committed to responsible usage of artificial intelligence. We’re proud to be transparent about our use of both proprietary ASAPP models and external models from other providers.
AI Trust Resources
Inside CXP's GenerativeAgent® security: The framework behind safe and reliable AI
This article outlines how ASAPP's GenerativeAgent secures generative AI with a safety-centric framework that combines strict scope limits, privacy-first data handling (including zero data retention and aggressive redaction), layered guardrails against prompt injection and harmful hallucinations, grounding requirements for reliable outputs, and classic security controls (encryption, access boundaries, monitoring/testing) to protect data and ensure safe, compliant AI behavior.
Raising the bar: AI agent safety and security in financial services
This article argues that AI agents in financial services must go beyond basic safety guardrails by providing full visibility (audit trails + performance analytics), real-time monitoring, deep human-AI oversight (not just escalation), and robust testing/fine-tuning tools so that high-stakes use cases are safe, secure, measurable, and controllable before and after deployment.
Redaction: A cornerstone of our privacy-by-design approach
This article explains how automated redaction is a core AI capability in ASAPP's privacy-by-design approach, using AI to detect and remove sensitive data in real time, enforce data-minimization, reduce exposure risks, and support compliance requirements—making it clearly relevant to AI security, privacy, and safety controls, not product performance.
The evolution of input security: From SQLi & XSS to prompt injection in large language models
This article explains how traditional input security threats (SQLi, XSS) have evolved into AI-native risks like prompt injection in LLMs, and describes the need for new defensive techniques (input validation, context control, isolation, and monitoring) to protect AI systems from manipulation, data leakage, and unsafe behavior—making it directly relevant to AI security and AI safety capabilities, not product performance.
Preventing hallucinations in generative AI agent: Strategies to ensure responses are safely grounded
This article explains how ASAPP reduces hallucinations in generative AI agents by constraining model behavior through grounding, controlled context, validation layers, and oversight mechanisms, ensuring safer, more reliable AI outputs without relying on raw model creativity.
AI security and AI safety: Navigating the landscape for trustworthy generative AI
This article outlines how trustworthy generative AI requires a combined AI security and AI safety approach, addressing AI-specific risks such as misuse, unsafe outputs, and loss of control through layered safeguards, governance, monitoring, and responsible design principles rather than relying solely on model accuracy or performance.