About · Aljona Schwan

Forensic investigation specialist.
Risk specialist.
The disciplines converge.

The intersections are where the most consequential governance gaps hide — and where AI Resilience Lab operates.

Aljona Schwan — Founder, AI Resilience Lab · Zürich, Switzerland
01 Background

My work sits at the intersection of disciplines that rarely appear in the same profile. That is not an accident — it is the result of fifteen years of deliberate practice across fraud investigation, risk management, data governance, and AI ethics. Each discipline sharpens the others. Together they produce a lens that is genuinely uncommon in the AI governance space.

I have been working on AI ethics and governance since 2016 — before most of the frameworks that now define the field were written, before the EU AI Act was proposed, before agentic AI was a board-level concern. Nine years of direct engagement means I tracked the shift from theoretical alignment questions to deployment risk in real time.

A decade of computer science study enables me to work at the architecture level — not just the policy level. Governance designed at the policy layer describes what should happen. Governance designed at the architecture layer enforces it.

Five years in risk management and data governance added the operational layer that purely technical or purely investigative work cannot provide. I have seen how organisations actually govern data assets, manage model risk, and design operational controls — and more importantly, where those controls consistently fall short regardless of the governance program's maturity.

The forensic investigation background adds the adversarial layer that most AI governance work lacks. Fraud investigation teaches you to think like the threat, not the auditor. You do not ask what went wrong — you ask what made it possible.

AI Resilience Lab applies that full stack to a specific domain: the governance of agentic AI systems at the execution layer — the moment the agent acts, the sequence of tool calls it makes, and the decisions it takes between the policy document and the outcome. That is where the existing frameworks stop. It is where the real risk lives.

15+
Years in fraud investigation & risk management
9
Years working on AI ethics & governance
5
Years in risk management & data governance
ZRH
Based in Zurich, Switzerland
02 How I work — the forensic methodology
01
Find the gap, not the actor
The threat is not the attacker — it is the gap that made the attack possible. Every AI governance engagement starts with mapping what the system is technically prevented from doing, not just what it is supposed to do. Those are not the same question.
02
Adversarial scenario modeling
I design for failure before building for success: mapping motive, means, and opportunity at the agent execution layer and identifying the sequences of tool calls that could produce unauthorized outcomes, before a real deployment encounters them.
03
Governance in the architecture
A control that lives only in a policy document is not a control. My work embeds governance at the architecture level — permission boundaries, behavioral audit logging, human checkpoints — so that what the agent cannot do is enforced technically, not just described.
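The principle behind these three steps can be sketched in code. The example below is purely illustrative (every tool name and function here is hypothetical, not drawn from any specific engagement or product): a permission boundary the agent cannot talk its way around, an audit log that records every attempt rather than only successes, and a human checkpoint for high-consequence actions.

```python
from datetime import datetime, timezone

# Illustrative permission boundary for an agent's tool calls.
# All names are hypothetical -- a sketch of the pattern, not an implementation.

ALLOWED_TOOLS = {"read_ledger", "draft_report"}   # what the agent is permitted to do
CHECKPOINT_TOOLS = {"transfer_funds"}             # permitted only with human sign-off

audit_log = []  # behavioral audit trail: every attempt, allowed or blocked

def guarded_call(tool: str, args: dict, approved: bool = False):
    """Enforce the boundary in code, not in a policy document."""
    entry = {"ts": datetime.now(timezone.utc).isoformat(),
             "tool": tool, "args": args}
    if tool in CHECKPOINT_TOOLS and not approved:
        entry["outcome"] = "blocked: human checkpoint required"
        audit_log.append(entry)
        raise PermissionError(entry["outcome"])
    if tool not in ALLOWED_TOOLS | CHECKPOINT_TOOLS:
        entry["outcome"] = "blocked: outside permission boundary"
        audit_log.append(entry)
        raise PermissionError(entry["outcome"])
    entry["outcome"] = "executed"
    audit_log.append(entry)
    return f"{tool} executed"
```

The design choice is the point: the deny decision and the audit entry live in the same code path the agent must pass through, so what the agent cannot do is a technical fact, not a described intention.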
03 Focus areas
Regulated industries

Banking, insurance, and corporate technology in Switzerland and Europe

Primary focus: UBS, Swiss Re, Zurich Insurance, Julius Baer, Vontobel, private banks, and the broader Swiss and European financial sector. Organisations where the regulatory stakes of AI control failure are highest and the governance maturity gap is most acute.

Regulatory context

EU AI Act · FINMA · ISO 42001 · NIST AI RMF

EU AI Act high-risk system rules take effect August 2026 — the first significant enforcement cycle. FINMA model risk guidance governs the Swiss financial sector. I map agentic AI deployments against these frameworks and identify the governance gaps none of them yet prescribe at the execution layer.

Global frameworks

Singapore IMDA · WEF · OWASP · CSA MAESTRO · Stanford HAI

Beyond the primary regulatory context, I track state-of-the-art frameworks globally and translate leading research — including the 2025 AI Agent Index, the MIT AI Risk Repository, and NVIDIA's agentic safety framework — into practitioner-grade governance tools for regulated industry.

Named category

AI Agent Fraud — white paper in development

The exploitation — intentional or emergent — of agentic AI systems to produce unauthorized outcomes. A risk category that existing fraud frameworks do not address because they were built for human actors. This is the named category I am developing as a formal contribution to the field.