About
I work on ethical questions around artificial intelligence and decision-making in crisis and high-risk contexts.
My interest is less in what systems promise to do in theory, and more in how they are actually used - especially in environments where time is short, information is incomplete, and the consequences of decisions are real and often irreversible.
Background
My background is in disaster management, business continuity, and organisational risk.
I spent 16 years at PwC in internal roles across business continuity, organisational resilience, and risk management. This included building and maintaining the UK firm’s ISO 22301-aligned business continuity management system, alongside work on related management systems aligned to ISO 9001 (quality), ISO 20000-1 (IT service management), ISO 27001 (information security), and BS 10008 (legal admissibility of electronic information).
This work involved supporting governance and assurance activities, coordinating organisational incident responses when plans were tested by real events, and ensuring that formal standards translated into arrangements that actually functioned under operational pressure.
This experience also shapes how I engage with emerging AI governance standards, including ISO/IEC 42001 - not as abstract frameworks, but as systems that must operate under real organisational and decision-making constraints.
These roles involved internal ownership of systems rather than short-term advisory input, with responsibility for arrangements that had to work in real conditions, not just on paper. They also required close collaboration with client-facing teams to assess risk, controls, and assurance positions across a wide range of engagements.
Together, this experience provided direct insight into how risk is understood, communicated, and acted upon in practice - inside complex organisations with competing priorities, imperfect information, and uneven accountability.
AI and decision-making under pressure
More recently, I completed an MSc in Artificial Intelligence (with distinction). My research examined the practical performance and limitations of large language models, with a particular focus on how AI systems shape human judgement under pressure. I have hands-on experience working with data, code, and machine learning models, which grounds my thinking about AI limitations, failure modes, and the gap between design assumptions and real-world behaviour.
My ethics and risk work is rooted in technical experience building and evaluating AI systems, not just analysing them at a distance.
Through research, writing, and advisory work, I examine how AI influences what is seen as urgent, relevant, or actionable; whose perspectives are amplified or constrained by automated systems; and how responsibility is distributed when decisions are shaped by models, data, and tools rather than individuals alone.
Current work
I am an associate of Dauntless Group, working with the team to develop interdisciplinary services and innovation across ethical AI, risk, and decision-making.
About this site
Ethical assumptions that may hold in stable environments - around consent, oversight, or “human-in-the-loop” safeguards - are often strained or disrupted when speed and uncertainty dominate. In crisis and high-stakes settings, those strains matter.
This site is a space to think carefully about those tensions.
Not to argue for or against the use of AI in crisis contexts, but to examine how it is being introduced, governed, and trusted in practice - and what that means for accountability, agency, and risk.
If you’re interested in slowing things down just enough to ask better questions about AI and decision-making under pressure, you’re very welcome here.