Four proprietary frameworks developed through applied forensic analysis of how agentic AI systems fail — not how they are theorised to fail. Each maps directly to regulatory requirements under the EU AI Act, FINMA Guidance 08/2024, and ISO 42001.
Governance built for autonomous actor behavior. Standard governance assumes rule-following. Agentic systems operate on goal-based logic, which means every control assumption built for static systems needs to be re-examined. The Behavioral Governance Framework provides three interdependent pillars that address how intent is preserved when tasks are delegated, whether permission boundaries hold under goal-driven logic, and how behavioral drift is detected before it becomes a reportable control failure.
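A minimal sketch of the third pillar, assuming drift is measured as divergence between an agent's recent action mix and an approved baseline; the thresholds, action labels, and the `DriftMonitor` name are illustrative assumptions, not part of the framework itself.

```python
from collections import Counter
import math

class DriftMonitor:
    """Illustrative behavioral drift check: compares an agent's recent
    action mix against an approved baseline and flags divergence before
    it reaches a reportable level. All thresholds are placeholders."""

    def __init__(self, baseline: dict[str, float],
                 alert_at: float = 0.10, report_at: float = 0.25):
        self.baseline = baseline    # approved action distribution
        self.alert_at = alert_at    # early-warning threshold (assumed value)
        self.report_at = report_at  # level treated as a control failure (assumed value)

    def divergence(self, recent_actions: list[str]) -> float:
        """KL divergence of the observed action mix from the baseline;
        actions absent from the baseline get a tiny mass so they register as drift."""
        counts = Counter(recent_actions)
        total = sum(counts.values()) or 1
        eps = 1e-6
        div = 0.0
        for action in set(self.baseline) | set(counts):
            obs_p = counts.get(action, 0) / total
            base_p = self.baseline.get(action, eps)
            if obs_p > 0:
                div += obs_p * math.log(obs_p / base_p)
        return div

    def status(self, recent_actions: list[str]) -> str:
        d = self.divergence(recent_actions)
        if d >= self.report_at:
            return "control_failure"
        if d >= self.alert_at:
            return "drift_alert"
        return "within_baseline"

# Example: an agent authorized mainly to quote starts issuing more refunds.
monitor = DriftMonitor({"quote": 0.7, "refund": 0.25, "escalate": 0.05})
print(monitor.status(["refund"] * 45 + ["quote"] * 55))  # drifted mix raises an early alert
```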
The AI Agent Governance Stack traces accountability from business objective to human oversight, closing the gaps where agentic systems acquire capability without corresponding control. Most agentic deployments fail not because individual controls are absent, but because the stack has gaps between layers: capability is granted that no permission boundary constrains, behavioral audit logging is missing at the layer where autonomous decisions are made, and human oversight operates on aggregated outputs rather than on the decision chain that produced them.
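A minimal sketch of the kind of layer-gap check the stack implies, assuming a deployment is described by the capabilities granted to an agent, the permission policies that constrain them, and the layers with behavioral audit logging; the schema, field names, and `find_stack_gaps` function are illustrative assumptions, not the stack's actual structure.

```python
from dataclasses import dataclass, field

@dataclass
class AgentDeployment:
    """Illustrative description of one agent deployment (assumed schema)."""
    capabilities: set[str]                                              # actions the agent can take
    permission_policies: dict[str, str] = field(default_factory=dict)   # capability -> constraining policy
    logged_layers: set[str] = field(default_factory=set)                # layers with behavioral audit logging

def find_stack_gaps(deployment: AgentDeployment) -> list[str]:
    """Flag the gap patterns described above: capability with no constraining
    permission boundary, and no behavioral audit logging at the layer
    where autonomous decisions are made."""
    gaps = []
    for cap in sorted(deployment.capabilities):
        if cap not in deployment.permission_policies:
            gaps.append(f"capability '{cap}' has no constraining permission boundary")
    if "autonomous_decision" not in deployment.logged_layers:
        gaps.append("no behavioral audit logging at the autonomous-decision layer")
    return gaps

# Example: a payment capability is granted, but only quoting is bounded by a policy.
deployment = AgentDeployment(
    capabilities={"quote_price", "issue_payment"},
    permission_policies={"quote_price": "pricing-policy-v3"},
    logged_layers={"api_gateway"},
)
for gap in find_stack_gaps(deployment):
    print(gap)
```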
Five categories of control failure, derived from forensic analysis of agentic AI incidents and near-misses. Where most risk taxonomies describe what went wrong after the fact, this taxonomy maps failure modes to architectural gaps that adversarial actors or emergent agent behavior can exploit before a compliance review would surface them. The taxonomy is designed as a pre-deployment and in-deployment diagnostic, not a post-incident classification system.
AI Agent Fraud is a named risk category: the exploitation — intentional or emergent — of agentic AI to produce unauthorized outcomes. It is not a subset of conventional fraud. Where conventional fraud requires a human actor to circumvent controls, AI Agent Fraud uses the agent itself as the vector — operating within sanctioned permission boundaries while producing outcomes no principal authorized. The plumbing is the problem: the agent can execute a sequence of individually authorized actions whose combined effect is an outcome that no authorization chain approved.
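A minimal sketch of the composite-effect problem, assuming a payments agent whose individual transfers each pass a per-transaction check while the sequence as a whole exceeds what any principal authorized; the limits, amounts, and function names are illustrative assumptions.

```python
# Each transfer below the per-transaction limit passes the individual check,
# yet the sequence breaches a daily authorization that no single check evaluates.

PER_TRANSACTION_LIMIT = 10_000   # assumed per-action permission boundary
DAILY_AUTHORIZED_TOTAL = 25_000  # assumed limit on what principals actually authorized

def authorize_single(amount: float) -> bool:
    """Per-action control: often the only check applied to an agent's actions."""
    return amount <= PER_TRANSACTION_LIMIT

def authorize_sequence(amounts: list[float]) -> bool:
    """Composite control: evaluates the combined effect of the sequence,
    which is where the pattern described above would surface."""
    return all(authorize_single(a) for a in amounts) and sum(amounts) <= DAILY_AUTHORIZED_TOTAL

transfers = [9_500, 9_500, 9_500, 9_500]            # four individually authorized actions
print(all(authorize_single(t) for t in transfers))  # True: every action passes in isolation
print(authorize_sequence(transfers))                # False: the combined effect was never authorized
```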
AI Resilience Lab applies these frameworks in active client engagements — control architecture reviews, pre-deployment risk assessments, and behavioral governance design for regulated institutions in Switzerland and the EU.