top of page
cube_w_bg.png

AI SOC

Summit

March 3, 2026

Tysons, VA

Hyatt Regency

7901 Tysons One Pl

Tysons, VA 22102

March 3, 2026

09:00 - 17:00

For Security Teams who use AI and defend AI deployments

Break AI systems before adversaries do. Build pipelines you can run on Monday. Red team models at scale. Bridge the gap between your compliance frameworks and your actual SOC operations.

 

No vendor pitches. No AI theory. Just practitioners showing what works, what fails, and what's actually dangerous, then handing you the tools to do something about it.

 

One day. Two tracks. Walk out with something you can use.

 

Compete in hands-on hackathons that pit human intuition against fully autonomous security operations. Or go deep in breakout sessions where you'll see working prototypes, run live attacks, and stress-test the AI tools you're already deploying.

Speakers

Ajit Gaddam  Head of Fraud, Financial Crimes, and Product Security, HealthEquity

Ajit Gaddam

Head of Fraud, Financial Crimes, and Product Security, HealthEquity

Monzy Merza  CEO & Co-Founder, Crogl

Monzy Merza

CEO & Co-Founder, Crogl

Andrew Heibel

Andrew Heibel

AI Researcher

Brennan Lodge

Brennan Lodge

Founder, BLodgic Inc.

Raja Sekhar Rao Dheekonda

Raja Sekhar Rao Dheekonda

Distinguished Engineer, Dreadnode

Gary Lopez

Gary Lopez

Founder, Tinycode

Eric Zietlow

Eric Zietlow

Platform Lead, DeepTempo

1757950646231_edited.jpg

Ed Albanese

Founder & CEO, ThirdLaw

Agenda

09:00 - Welcome and Keynote

10:30 - Hackathon / Breakouts

12:00 - Lunch

13:00 - Hackathon / Breakouts

17:00 - Happy Hour

DSC00642_edited.jpg

Evolution, Not Revolution: Building the AI-Ready SOC

Ajit Gaddam

Ajit Gaddam

Head of Fraud, Financial Crimes, and Product Security, HealthEquity

Monzy Merza

Monzy Merza

CEO & Co-Founder, Crogl

95% of AI projects fail to deliver measurable impact. The problem isn't the models, it's the operationalization. In this practitioner-led keynote, we cut through the vendor hype to answer three questions: What's actually working in AI SOC deployments? What are the real considerations for success that most vendors won't tell you? And how do you build for the future without repeating the mistakes of the past? Featuring security leaders from financial services, enterprise, and platform providers, we'll challenge assumptions about data normalization, explore why agents are neither users nor admins, and make the case for human capacity multiplication over headcount reduction. You'll leave with a practical framework for evolving your SOC, not retrofitting AI onto broken processes, but re-architecting for what's next.

How to Break AI Systems (Before Someone Else Does)

Gary Lopez

Gary Lopez

Founder, Tinycode

AI systems are failing in production, and traditional security testing is missing the problems that matter most. The threat landscape has shifted from simple chatbot bypasses to sophisticated AI-orchestrated cyber espionage. Recent disruptions of state-sponsored campaigns reveal a new reality: attackers are now using agentic TTPs to automate 80-90% of the attack lifecycle—executing reconnaissance, exploit generation, and data exfiltration at a scale that far outpaces human defenders. This presentation explores why AI systems fail to distinguish between instructions and data, creating a fundamental architectural risk. We will demonstrate live attacks, including hidden prompts in documents, AI agent goal manipulation, and privacy violations. You’ll leave with practical methods for testing your own systems, an understanding of high-risk vulnerabilities, and a toolkit of publicly available security resources. All attendees will also gain access to our AI Red Teaming practice platform to continue developing their hacking skills against vulnerable AI applications after the session.

RAGe Against the Cybersecurity Machine

Brennan Lodge

Brennan Lodge

Founder, BLodgic Inc.

Security teams keep getting asked the same question in different clothing: “Show me you’re doing the control,” not just that you wrote it down. This talk shows a practical way to bridge GRC requirements and day-to-day cybersecurity defense work using a RAG workflow plus an MCP-style tool interface to generate three operational artifacts from a control statement: (1) an investigation checklist aligned to what an auditor or regulator will actually ask for, and (2) an evidence bundle with timestamps, inputs/outputs, and an auditable change log. We will build a tiny, reproducible pipeline that ingests a small corpus of policies/controls (SOC 2 / ISO 27001 flavored examples), retrieves relevant internal context, and emits structured outputs that the SOC can execute and defend. The “good, bad, and ugly” is included and where RAG fails (stale controls, ambiguous language, conflicting sources), how prompt injection shows up in policy and ticket text, and what guardrails make the outputs safe enough to operationalize (source-citation requirements, constrained schemas, allowlisted tools, and human approval gates). Attendees leave with a working pattern they can adapt to their environment and a repo they can run locally.

Designing Real-Time LLM Agents for Complex
Interactive Systems

Andrew Heibel

Andrew Heibel

AI Researcher

Large Language Models are rapidly moving beyond static chat interfaces into autonomous agents capable of perceiving, reasoning, and acting in real time. We will explore the design and implementation of an LLM-driven agent that operates continuously in interactive environments — from playing video games to simultaneously performing complex cybersecurity tasks.

186 Jailbreaks in 137 Minutes: Why AI Red Teaming Must Industrialize

Raja Sekhar Rao Dheekonda

Raja Sekhar Rao Dheekonda

Distinguished Engineer, Dreadnode

AI systems are no longer single-prompt, text-only interfaces. They are multimodal, stateful, tool-using agents deployed in production environments. Yet, most AI red teaming today remains manual, ad hoc, and fundamentally unscalable. Manual assessments alone are no longer sufficient. Scalable, instrumented, and continuously-running offensive systems are required to keep pace with modern AI deployments. Offense must drive defense, or security and safety will always lag behind capability. See the impact of scalable AI red teaming first-hand as we orchestrate a systematic AI risk assessment of Llama Maverick-17B-128E-Instruct that uncovered 186 distinct jailbreaks with a 78% attack success rate in just over two hours. Rather than relying on handcrafted prompts, we apply MLOps principles to automate attack generation, evaluation, pruning, and analysis across text and multimodal inputs. Hear key observations from running algorithmic attacks like Crescendo, GOAT, and TAP, which expose critical weaknesses in multi-turn safety training, low-query attack detection, and cross-modal intent reasoning. The most concerning attacks aren’t the obvious ones, but the subtle, efficient jailbreaks that are nearly indistinguishable from normal usage. It remains offensive security’s responsibility to evolve the processes used to effectively and efficiently assess risk and inform defenses. This real-world case study in scalable, automated red teaming is a production example of what's possible when AI is applied to offense at-scale.

Detecting Evasive Command & Control: A Practical Deep Learning Approach for SOC Teams

Eric Zietlow

Eric Zietlow

Platform Lead, DeepTempo

Command and control channels are increasingly designed to evade traditional detection: they mimic legitimate protocols, respect rate limits, and blend into operational traffic patterns. Signature-based tools miss novel variants. Anomaly detection flags too many false positives or misses attacks that stay "within baseline." This session demonstrates a different approach: using a purpose built deep learning foundation model aka a LogLM that learns the structural signatures of malicious behavioral timelines rather than relying on deviations or known patterns. We'll show live detection of challenging C2 scenarios that would evade typical SIEM and NDR deployments, explain why the behavioral timeline structure reveals intent even when individual flows appear normal, and demonstrate practical SOC integration including AI-powered investigation workflows.

Runtime AI Security

1757950646231_edited.jpg

Ed Albanese

Founder & CEO, ThirdLaw

Security teams are being asked to govern and secure a new class of software: applications built with, or augmented by, LLMs. These systems behave differently than conventional software, with non-deterministic outputs, prompt- and context-driven execution paths, rapid model and version drift, opaque third-party components, and new pathways for sensitive data exposure. Existing AppSec, SIEM, EDR, and cloud security tools remain necessary, but they are not designed to continuously observe and enforce security and compliance across AI-in-the-loop application flows. The core challenge is that traditional telemetry captures events, while LLM risk often lives in intent, context, and outputs. In this session, I’ll distinguish monitoring security events (what happened) from monitoring AI behavior (why the system acted and what it produced), and show why this gap matters for detection, response, and governance. We’ll walk through LLM-specific attack and failure modes that can look “normal” in conventional logs: prompt injection and policy bypass, sensitive data exposure via retrieval and tool calls, agentic privilege misuse, and silent drift in model behavior that changes risk over time. Attendees will leave with a practical framework and implementation checklist to reduce AI application risk while improving operational clarity.

hackathon_image.png

Hackathons: Suit up with AI

Time to put your skills (and AI) to the test in hands-on hackathons designed to push the boundaries of AI-powered security operations. Suit up with (or without AI) and challenge the status quo to find the real winner between human driven and fully autonomous Security Operations. Compete alongside fellow security practitioners, form alliances, and tackle real-world challenges as you showcase creative approaches to enhancing the SOC. Prizes awarded for 1st, 2nd, and 3rd place in each hackathon—may the best approach find victory.

FIghter Jet3_edited.jpg

Join us for the first ever AI SOC Summit

Frequently Asked Questions

Gov Computer2.jpg

Make every security team member as effective as the entire team

Crogl is a compound AI system for security operations. By learning your workflows and working directly with non-normalized data, Crogl autonomously investigates alerts, works tickets, and executes threat hunts with full auditability.

500 Terry Francine Street

San Francisco, CA 94158

[email protected]

Tel: 123-456-7890

Fax: 123-456-7890

  • White Facebook Icon
  • White Twitter Icon
  • White Instagram Icon

© 2035 by Horizon. Powered and secured by Wix

bottom of page