The intersections are where the most consequential governance gaps hide — and where AI Resilience Lab operates.
Zürich · Switzerland
My work sits at the intersection of disciplines that rarely appear in the same profile. That is not an accident — it is the result of fifteen years of deliberate practice across fraud investigation, risk management, data governance, and AI ethics. Each discipline sharpens the others. Together they produce a lens that is genuinely uncommon in the AI governance space.
I have been working on AI ethics and governance since 2016 — before most of the frameworks that now define the field were written, before the EU AI Act was proposed, before agentic AI was a board-level concern. Nine years of direct engagement means I tracked the shift from theoretical alignment questions to deployment risk in real time.
A decade of computer science study enables me to work at the architecture level — not just the policy level. Governance designed at the policy layer describes what should happen. Governance designed at the architecture layer enforces it.
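To make that distinction concrete, here is a minimal sketch in Python. The payment tool, the CHF limit, and the approval hook are all hypothetical, chosen only to show a rule enforced at the point of execution rather than described in a document.

```python
# Policy layer: "Agents must not initiate payments above CHF 10,000
# without human approval." A sentence in a governance document.
#
# Architecture layer: the same rule enforced at the point of execution.
# All names below (payment tool, limit, approval hook) are hypothetical.

PAYMENT_LIMIT_CHF = 10_000

class ToolCallDenied(Exception):
    """Raised when a tool call violates an enforced policy."""

def require_approval(amount_chf: float) -> bool:
    # Stand-in for a real human-in-the-loop approval flow.
    return False  # deny by default in this sketch

def execute_payment(amount_chf: float, beneficiary: str) -> str:
    """Execution-layer guard: the rule cannot be skipped, because the
    only code path to the payment tool runs through this check."""
    if amount_chf > PAYMENT_LIMIT_CHF and not require_approval(amount_chf):
        raise ToolCallDenied(
            f"Payment of CHF {amount_chf:,.0f} exceeds the enforced limit "
            f"and was not approved."
        )
    return f"paid CHF {amount_chf:,.0f} to {beneficiary}"  # the actual tool call
```

A policy document can be ignored or drift out of date; a guard on the only path to the tool cannot be talked around, because it sits below the model.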
Five years in risk management and data governance added the operational layer that purely technical or purely investigative work cannot provide. I have seen how organisations actually govern data assets, manage model risk, and design operational controls, and, more importantly, where those controls consistently fall short regardless of a governance programme's maturity.
The forensic investigation background adds the adversarial layer that most AI governance work lacks. Fraud investigation teaches you to think like the threat, not the auditor. You do not ask what went wrong — you ask what made it possible.
AI Resilience Lab applies that full stack to a specific domain: the governance of agentic AI systems at the execution layer — the moment the agent acts, the sequence of tool calls it makes, and the decisions it takes between the policy document and the outcome. That is where the existing frameworks stop. It is where the real risk lives.
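As an illustration of what execution-layer governance can look like in practice, here is a minimal Python sketch of a mediator that reviews each tool call an agent proposes before anything runs. The tool names, the sequence rule, and the audit-log format are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str
    args: dict

@dataclass
class ExecutionGovernor:
    """Sits between the agent and its tools: every proposed call is
    checked against policy and logged before anything executes."""
    allowed_tools: set[str]
    audit_log: list[str] = field(default_factory=list)
    history: list[str] = field(default_factory=list)

    def review(self, call: ToolCall) -> bool:
        # Rule 1: only pre-approved tools may run at all.
        if call.tool not in self.allowed_tools:
            self.audit_log.append(f"DENY {call.tool}: not on allowlist")
            return False
        # Rule 2 (hypothetical sequence rule): an export must not follow
        # a read of customer data within the same run.
        if call.tool == "export_data" and "read_customer_records" in self.history:
            self.audit_log.append("DENY export_data: follows customer read")
            return False
        self.audit_log.append(f"ALLOW {call.tool}")
        self.history.append(call.tool)
        return True

governor = ExecutionGovernor(allowed_tools={"read_customer_records", "export_data"})
for proposed in [ToolCall("read_customer_records", {}), ToolCall("export_data", {})]:
    if governor.review(proposed):
        pass  # dispatch to the real tool here
print(governor.audit_log)
# ['ALLOW read_customer_records', 'DENY export_data: follows customer read']
```

The specific rules matter less than where they live: between the agent's decision and its effect, which is exactly the span a policy document cannot reach.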
Primary targets: UBS, Swiss Re, Zurich Insurance, Julius Baer, Vontobel, private banks, and the broader Swiss and European financial sector. Organisations where the regulatory stakes of AI control failure are highest and the governance maturity gap is most acute.
EU AI Act high-risk system rules take effect in August 2026, opening the first significant enforcement cycle. FINMA model risk guidance governs the Swiss financial sector. I map agentic AI deployments against these frameworks and identify the execution-layer gaps for which none of them yet prescribes controls.
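A minimal sketch of how such a mapping can be operationalised, assuming hypothetical deployment attributes; the requirement entries are illustrative paraphrases of framework themes, not legal text.

```python
# Gap-mapping sketch. Requirements are illustrative paraphrases of
# framework themes (not legal text); deployment fields are hypothetical.

deployment = {
    "human_oversight": True,       # can a human intervene mid-run?
    "decision_logging": False,     # are individual tool calls logged?
    "tool_call_controls": False,   # are execution-layer guards in place?
}

requirements = [
    ("EU AI Act (high-risk): human oversight", "human_oversight"),
    ("EU AI Act (high-risk): record-keeping",  "decision_logging"),
    ("Execution-layer control (no framework yet prescribes this)", "tool_call_controls"),
]

gaps = [name for name, attr in requirements if not deployment[attr]]
print("Governance gaps:", gaps)
```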
Beyond the primary regulatory context, I track state-of-the-art frameworks globally and translate leading research — including the 2025 AI Agent Index, the MIT AI Risk Repository, and NVIDIA's agentic safety framework — into practitioner-grade governance tools for regulated industry.
The exploitation of agentic AI systems, whether intentional or emergent, to produce unauthorised outcomes: a risk category that existing fraud frameworks do not address because they were built for human actors. This is the named category I am developing as a formal contribution to the field.