Issue #1. Ethical AI Wasn’t Designed for Disasters
Why read this newsletter?
Every disaster movie starts with someone ignoring a scientist.
Artificial intelligence is everywhere. Whether organisations are already using it or still exploring the possibilities, there is no shortage of new tools and terms to keep up with - from systems like ChatGPT to concepts such as large language models, prompt engineering, hallucinations, and retrieval-augmented generation (RAG). The list grows daily.
AI is also increasingly present in disaster and emergency management - from early warning systems and damage assessment to logistics planning and decision support. But what does this actually mean in practice?
In crisis settings, decisions are made under pressure, with incomplete information, and often with life-altering consequences. The promise of AI in these environments is clear: speed, scale, and analytical reach beyond human capacity. But with that promise comes real risk. Errors, bias, and opaque systems can be introduced precisely where there is the least room for them.
So what are we going to do about this?
Ethical guidance for AI exists across many documents, organisations, and initiatives - but it is fragmented and uneven in quality. Even less clear is how these principles hold up in crisis environments, where consent is constrained, accountability is scattered, and “human-in-the-loop” may be more symbolic than real.
This newsletter exists to sit in that gap - without claiming to fill it.
It does not argue for or against the use of AI in crisis contexts. Instead, it slows things down enough to ask how AI is being used, who it serves, and who bears the risk when it fails.
Why do disasters change the ethical rules so much anyway?
Disasters create unusual environments. They compress time, concentrate power, and narrow the range of choices available to individuals and communities. Decisions that would normally involve consultation, consent, or deliberation are often made quickly, by a small number of actors, under conditions of uncertainty.
This matters for AI because many ethical assumptions baked into technology design do not hold in crisis settings. Consent may be nominal or impossible. Data may be collected from people with little ability to refuse, limited understanding of how it will be used, or no control over its future reuse. Accountability can become scattered as responsibility is distributed across agencies, vendors, models, and workflows.
In these contexts, AI systems do not simply support decisions. They can shape strategy - influencing what is seen as urgent, relevant, or even possible. A risk score, forecast, or generated summary can quietly steer attention and resources, even when a human remains formally in charge.
The challenge is not only whether an AI system is accurate, but whether its influence is visible, contestable, and appropriate for the moment in which it is used.
What may be acceptable in planning or low-stakes settings can become ethically fraught when lives, livelihoods, and trust are on the line.
How will this newsletter approach this?
Ethical risks in disaster settings rarely appear as isolated technical failures. More often, they emerge from predictable patterns across tools, organisations, and crisis contexts.
To make those patterns visible, this newsletter examines AI developments through four recurring lenses. They are not exhaustive, but each one highlights a common way ethical risk tends to surface when AI is introduced into high-stakes, human-centred decision-making.
The four lenses
1. Power & Agency
Who decides, who benefits, and who bears the risk?
In crisis settings, AI systems can acquire authority they have not earned. Confident forecasts, scores, or summaries may narrow the range of options under consideration, embedding value judgements about what matters most - including implicit assumptions about fairness and whose needs are prioritised.
In time-critical environments, these outputs can shortcut human deliberation rather than support it, shifting decision-making power away from people and towards systems whose assumptions are rarely made explicit, or contested, at the moment decisions are taken.
What appears to be neutral technical support can, in practice, define urgency, shape priorities, and influence who receives assistance - with consequences that are not evenly distributed.
2. Data & Consent
How is data collected, owned, reused, and protected?
Humanitarian crises are environments where meaningful consent is often constrained or impossible. Ethical data practices designed for stable settings - informed consent, clear purpose limitation, restrictions on reuse - can quickly erode under emergency conditions.
Bias can be introduced not only through what data is collected, but through whose data is missing, outdated, or over-represented. The ethical risk is not limited to collection itself, but extends to what happens afterwards: when crisis data is retained, repurposed, or combined in ways that expose communities to harm long after the emergency has passed.
3. Accountability & Governance
Who is responsible when AI influences life-critical decisions?
When AI systems are embedded into complex, multi-agency crisis workflows, responsibility can become blurred. Decisions may be shaped by a combination of data pipelines, models, vendors, internal teams, and partner organisations.
Influence ≠ responsibility.
Systems can shape outcomes without being accountable for them, while organisations may retain formal accountability without the power, time, or transparency needed to intervene meaningfully.
Calls for transparency or explainability do not resolve this on their own if decision-makers cannot realistically challenge, override, or pause an AI system under operational pressure. When something goes wrong, it is often unclear who can intervene - or where accountability ultimately sits.
4. Operational Reality
What actually happens on the ground - not what the model promises?
AI systems often rely on historical or aggregated data that struggles to keep pace with rapidly changing crisis conditions. Infrastructure damage, population movement, political constraints, and access limitations can quickly invalidate model assumptions.
In these moments, AI rarely fails loudly. Instead, it risks failing quietly - producing outputs that appear reasonable, explainable, or technically sound while no longer reflecting the realities facing responders and affected communities.
Does this feel uncomfortably familiar? Then you’re probably in the right place.
Rather than treating these lenses as abstract principles, this newsletter uses them as practical tools. Some issues will explore a single lens in depth; others will examine specific cases through several lenses at once.
As this series begins, the aim is not to provide answers, but to ask better questions - together.
Week one question
If AI systems reshape human-centred crisis decision-making in subtle ways, how would we notice - and what would it take to intervene in time?
Leave your thoughts in the comments and I’ll see you in a fortnight.
RB.