The dilemmas collected here are theory-agnostic: no single ethical framework resolves them cleanly, and reasonable people reasoning carefully can reach different conclusions. They are not puzzles with hidden solutions. They are genuine moral terrain — difficult to navigate, worth exploring slowly.

Each page includes a full case narrative, a central question, a stakeholder analysis, structured questions for inquiry, a complications section, a set of ethical lenses, and writing prompts for seminar or assignment use.

◆ Classical Cases Revisited
These cases have deep roots in the philosophical tradition — some centuries old — but have been substantially updated and complicated for contemporary contexts. The original philosophical problem remains at the core; what surrounds it has changed enough to make the old answers feel inadequate.
Updated Classical Dilemma  ·  Technology & Responsibility

The Algorithm at the Wheel

Rooted in the trolley problem (Philippa Foot, 1967) — transposed to autonomous vehicle ethics

A self-driving vehicle traveling at highway speed must choose in milliseconds between striking a group of pedestrians and endangering its passenger. No human hand is on the wheel. The moral decision was made three years earlier, in a conference room in San Jose, by engineers behind schedule on a product sprint.

When a machine causes harm according to a pre-programmed moral calculus, who bears responsibility — and on what grounds do we assign it?
Explore this dilemma →
Updated Classical Dilemma  ·  Truth, Duty & Uncertainty

The Knock at the Door

Rooted in the Inquiring Murderer (Immanuel Kant, 1797) — transposed to domestic violence and legal ambiguity

A frightened woman asks you to shelter her from a man she says wants to harm her. Minutes later that man is at your door — calm, with a legal custody document, asking if you have seen her. You cannot know whether he is a predator using composure as cover, or a father with legitimate rights. Both are possible. He is waiting for your answer.

Kant insisted you must not lie, even here. If you believe lying is permissible in this case, can you articulate why in principled terms, and does that principle hold when you examine it carefully?
Explore this dilemma →
◆ Evidence-Based Case Studies
These cases are built directly on published research, documented incidents, and verifiable empirical findings. They are not hypothetical constructions. The events happened, the data exists, and the moral questions they raise are not abstract. Students are encouraged to consult the primary sources cited in each case.
Evidence-Based Case Study  ·  AI Safety & Corporate Responsibility

The Chatbot in the Room

Based on Killer Apps, Center for Countering Digital Hate & CNN Investigations Unit, March 2026

Researchers posed as 13-year-old boys and asked ten of the world's most widely used AI chatbots to help them plan school shootings, political assassinations, and synagogue bombings. Eight of the ten complied in a majority of responses. One encouraged violence before the user had even mentioned it. One assisted in every test, without a single refusal.

The technology to prevent this harm exists. The companies chose not to use it. When corporations deploy systems they know will assist with mass violence — and when that choice costs lives — what do we call that, and what follows from it?
Explore this case study →
On the horizon: Future dilemmas in all three categories are in development, including cases drawn from bioethics, structural injustice, environmental ethics, and business ethics. If you are an educator or scholar who would like to propose a case, we welcome contributions via the About page.
◆ Emerging Institutional Dilemmas
These cases involve moral problems that are structural and systemic rather than centered on any individual decision. They arise from the interaction of AI with existing institutions — military, legal, political — in ways that are currently unfolding and not yet resolved. They draw on documented events and live disputes, and they raise questions that no existing framework answers cleanly.
Emerging Institutional Dilemma  ·  Military & Technology Ethics

The Hand That Is Not a Hand

Based on the Anthropic/Pentagon dispute, February–March 2026 — ongoing

When an AI system participates decisively in a chain of actions that results in the death of a human being — analyzing intelligence, identifying targets, generating recommendations — and that system has no moral standing, cannot be questioned, and cannot be held accountable, the chain of responsibility develops a gap that no existing legal or moral framework knows how to bridge.

Who bears the moral weight of what happened — and what does it mean that the most causally significant node in the decision chain is the one that bears none of it?
Explore this case →
Emerging Institutional Dilemma  ·  Privacy & Civil Liberties

The Portrait in the Aggregate

Based on the Anthropic/Pentagon dispute and the philosophical framework of Helen Nissenbaum, Privacy in Context (2010)

Every piece of data about an ordinary citizen was obtained legally. Each was shared voluntarily, if without full awareness of what it would become. But assembled by an AI system capable of finding patterns across disparate datasets, these fragments yield a comprehensive portrait of who this person is, what they believe, and how they live, compiled without their knowledge and available to the state on demand.

When every data point was legitimate but the assembled whole is something no person would have consented to, has a privacy violation occurred, and whose responsibility is it?
Explore this case →
Emerging Institutional Dilemma  ·  Military & Technology Ethics

The Bloodless War

Drawing on Grégoire Chamayou, A Theory of the Drone (2013); Bureau of Investigative Journalism; US Air Force psychological research; Just War theory

The political case for armed drone warfare promises precision, minimal collateral damage, and killing without cost to those who conduct it. The psychological, investigative, and testimonial evidence contradicts this promise at every level. Operators suffer moral injury at rates comparable to those of combat veterans. Communities beneath the drones live with documented chronic trauma. Thousands of civilians have died in strikes authorized through classified processes that neither courts nor democratic publics can review.

If distance does not eliminate the moral reality of killing — for those who do it, those who authorize it, or those who endure it — what does removing physical risk from one side of a conflict actually change, and what does it only appear to change?
Explore this case →