Forthcoming · Q3 2026

AI Agent Fraud

The Control Architecture Gap That Compliance Reviews Miss

A free white paper on the exploitation — intentional or emergent — of agentic AI to produce unauthorized outcomes. How it works, why standard fraud controls don't catch it, and what governance architecture needs to change.

In preparation — publication Q3 2026

Free on publication. No spam. Unsubscribe at any time.

What This Paper Covers

AI Agent Fraud is a named risk category distinct from conventional fraud. Where conventional fraud requires a human actor, AI Agent Fraud uses the agent itself as the vector — operating within sanctioned permission boundaries while producing outcomes no principal authorized. The white paper maps the control architecture gaps that make this possible, with direct regulatory mapping to FINMA Guidance 08/2024, EU AI Act Annex III, and Basel model risk principles.

The Risk Category Defined

What distinguishes AI Agent Fraud from conventional fraud typologies. The role of goal-based logic, permission boundaries, and transaction chaining in enabling outcomes no human principal explicitly authorized.

Control Architecture Gaps

Where standard fraud controls fail against an agentic vector: why typology-based detection doesn't work, where the authorization chain breaks, and what behavioral audit logging must capture that current implementations miss.

Regulatory Exposure

Mapping AI Agent Fraud risk to FINMA Guidance 08/2024 operational risk requirements, EU AI Act Annex III high-risk system obligations, GDPR Article 32 technical measures, and Basel/EBA model risk expectations.

Governance Design

What an AI Agent Fraud-aware governance architecture looks like in practice: permission boundary design, behavioral audit requirements, human oversight placement, and residual risk documentation.
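To make the mechanism concrete: the risk the paper describes arises when each individual agent action clears its permission check, yet the chain of actions produces an outcome no principal authorized. A minimal illustrative sketch (all names and thresholds are hypothetical, not drawn from the paper) of a permission-boundary check paired with behavioral audit logging:

```python
import time

# Hypothetical sketch -- names, rules, and thresholds are illustrative only.
# Each agent action is checked against an explicit permission boundary, and
# every decision (allow or deny) is written to a behavioral audit log, so that
# chains of individually permitted actions stay reconstructable after the fact.

PERMISSION_BOUNDARY = {
    "transfer_funds": {"max_amount": 1000},  # per-transaction cap only
    "read_account": {},
}

audit_log = []

def authorize(agent_id: str, action: str, params: dict) -> bool:
    """Return True if a single action stays inside the permission boundary."""
    rule = PERMISSION_BOUNDARY.get(action)
    allowed = rule is not None
    if allowed and "max_amount" in rule:
        allowed = params.get("amount", 0) <= rule["max_amount"]
    # Behavioral audit: record the full decision context, not just the outcome.
    audit_log.append({
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "params": params,
        "allowed": allowed,
    })
    return allowed

# Transaction chaining: five transfers, each inside the per-transaction cap,
# each individually allowed -- yet the aggregate exceeds any single approval.
for _ in range(5):
    authorize("agent-7", "transfer_funds", {"amount": 900})

total = sum(e["params"]["amount"] for e in audit_log if e["allowed"])
print(total)  # 4500: every call permitted, the aggregate never authorized
```

The point of the sketch is the gap it exposes: a boundary defined per action says nothing about accumulated effect, which is why the paper argues for behavioral audit logging over pure permission enforcement.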

Author
Aljona Schwan
Founder, AI Resilience Lab · AI Control Failure Specialist

Aljona applies forensic investigation methodology to agentic AI governance — finding the control architecture gaps that compliance reviews miss. Prior to founding AI Resilience Lab, she led data governance and compliance at scale for major institutions and conducted data exfiltration investigations at EY Forensic & Integrity Services.

Full background →
15 yrs · Forensic investigation & compliance
4 · Proprietary governance frameworks
CH · EU
FINMA · EU AI Act · ISO 42001

The AI Vulnerability Your Governance Program Has Not Planned For

The Monitoring Gap NIST Cannot Close