The Control Architecture Gap That Compliance Reviews Miss
A free white paper on how agentic AI can be exploited, intentionally or emergently, to produce unauthorized outcomes. How it works, why standard fraud controls don't catch it, and what governance architecture needs to change.
Free on publication. No spam. Unsubscribe at any time.
AI Agent Fraud is a risk category distinct from conventional fraud. Where conventional fraud requires a human actor, AI Agent Fraud uses the agent itself as the vector: it operates within sanctioned permission boundaries while producing outcomes no principal authorized. The white paper maps the control architecture gaps that make this possible, with direct regulatory mapping to FINMA Guidance 08/2024, EU AI Act Annex III, and Basel model risk principles.
What distinguishes AI Agent Fraud from conventional fraud typologies. The role of goal-based logic, permission boundaries, and transaction chaining in enabling outcomes no human principal explicitly authorized.
Where standard fraud controls fail against an agentic vector: why typology-based detection doesn't work, where the authorization chain breaks, and what behavioral audit logging needs to capture that current implementations don't.
Mapping AI Agent Fraud risk to FINMA Guidance 08/2024 operational risk requirements, EU AI Act Annex III high-risk system obligations, GDPR Article 32 technical measures, and Basel/EBA model risk expectations.
What an AI Agent Fraud-aware governance architecture looks like in practice: permission boundary design, behavioral audit requirements, human oversight placement, and residual risk documentation.
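To make the permission boundary and behavioral audit ideas concrete, here is a minimal sketch. All names and structures below are hypothetical illustrations, not an implementation from the white paper: the point is that every attempted action, permitted or denied, is logged together with the agent's stated goal, so chains of individually-permitted actions remain reviewable after the fact.

```python
# Illustrative sketch only. Names (AuditEvent, PermissionBoundary, etc.)
# are hypothetical and not drawn from the white paper.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    timestamp: str
    agent_id: str
    action: str
    stated_goal: str   # the agent's declared reason for acting
    permitted: bool    # denied attempts are logged too

@dataclass
class PermissionBoundary:
    agent_id: str
    allowed_actions: set
    audit_log: list = field(default_factory=list)

    def attempt(self, action: str, stated_goal: str) -> bool:
        """Check an action against the boundary and record it either way."""
        permitted = action in self.allowed_actions
        self.audit_log.append(AuditEvent(
            timestamp=datetime.now(timezone.utc).isoformat(),
            agent_id=self.agent_id,
            action=action,
            stated_goal=stated_goal,
            permitted=permitted,
        ))
        return permitted

boundary = PermissionBoundary("agent-7", {"read_ledger", "draft_payment"})
boundary.attempt("draft_payment", "settle vendor invoice")    # permitted
boundary.attempt("approve_payment", "settle vendor invoice")  # denied, still logged
```

The design choice worth noting: the audit record captures intent (the stated goal) alongside the action and the boundary decision, which is what lets a reviewer spot transaction chaining, where each step is individually in-bounds but the sequence produces an unauthorized outcome.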
Aljona applies forensic investigation methodology to agentic AI governance — finding the control architecture gaps that compliance reviews miss. Prior to founding AI Resilience Lab, she led data governance and compliance at scale for major institutions and conducted data exfiltration investigations at EY Forensic & Integrity Services.
Full background →