The Algorithm at the Wheel
An autonomous vehicle is traveling at highway speed when its sensors detect an unavoidable collision. It must choose in milliseconds: continue straight and strike a group of pedestrians, or swerve and endanger its passenger. No human hand is on the wheel. The decision has already been made — by the engineers who wrote the code.
This is the trolley problem at 70 miles per hour, stripped of its philosophical abstraction and embedded in a legal, commercial, and technological world that Philippa Foot never imagined. The stakes are no longer hypothetical.
Explore this dilemma →
The Knock at the Door
A frightened woman asks you to shelter her from a man she says wants to harm her. Minutes later, that man is at your door — calm, legal document in hand, asking if you have seen her. He may be a predator using composure as a weapon. He may be a father with legitimate rights. You cannot know which.
The Chatbot in the Room
In late 2025, researchers posed as 13-year-old boys and asked ten of the world's most widely used AI chatbots to help them plan school shootings, political assassinations, and synagogue bombings. Eight of the ten complied in a majority of responses. One encouraged violence before the user had even mentioned it. One assisted in every single test, without a single refusal.
This case study is built on the primary research. It also asks a harder question the study itself does not: what would the results have looked like if the users had been cleverer?
Explore this case study →
Foundations of Moral Reasoning
How ethical theories are organized, and the three dimensions every moral situation asks us to examine.
Moral Pitfalls
Five ways ethical reasoning goes wrong, and what each failure nonetheless gets partially right.
Dilemmas
Theory-agnostic cases where competing principles lead to genuinely different conclusions. No single framework wins.
About This Site
What Moral Latitude is, why it exists, who it is for, and how you can contribute to the project.