The Case

It is a Tuesday morning in November. A self-driving vehicle — fully autonomous, no steering wheel, no brake pedal — is carrying a single passenger along a rain-slicked interstate at 68 miles per hour. The passenger, a 34-year-old nurse named Mara, is reviewing patient charts on her tablet. She is not watching the road. She has no reason to.

Without warning, a group of six construction workers steps onto the highway from behind a stopped truck. The workers believed they were in a protected zone. They were wrong, or the zone markings were inadequate — that detail will be disputed in court. The vehicle's sensor array detects and classifies the obstacle within 11 milliseconds. No human reflex could match that. No human is being asked to.

The system calculates two options: maintain course and strike the workers, or swerve hard into the concrete median barrier, which will likely kill or gravely injure Mara. There is no third option. The road is too wet, the speed too high, the gap too narrow.

In the 11 milliseconds before the outcome is determined, the algorithm executes the decision its engineers programmed it to make three years earlier, in a conference room in San Jose, during a product development sprint that ran long into the evening.

The question of what the vehicle should do in this situation has already been answered. The question we are left with is whether it was answered correctly — and whether anyone had the right to answer it at all.

When a machine causes harm or prevents it according to a pre-programmed moral calculus, who bears responsibility — and on what grounds do we assign it?
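
To make that abstraction concrete, consider a deliberately simplified, purely hypothetical sketch of what such a pre-programmed policy might look like. Every name and number below, and the utilitarian "minimize expected fatalities" rule itself, are illustrative assumptions — not a claim about any real manufacturer's software.

    # Purely hypothetical illustration; not any real vehicle's code.
    # A hard-coded "minimize expected fatalities" policy, fixed by
    # engineers at development time, long before any specific crash.
    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str                   # e.g. "maintain_course"
        expected_fatalities: float  # the model's estimate, passenger included

    def choose_maneuver(options: list[Option]) -> Option:
        # This single line is the entire "moral calculus": lives are
        # weighed by a rule chosen years before the moment of the crash.
        return min(options, key=lambda o: o.expected_fatalities)

    # The two options the case describes, with illustrative numbers:
    options = [
        Option("maintain_course", expected_fatalities=6.0),   # strike the workers
        Option("swerve_to_median", expected_fatalities=1.0),  # sacrifice the passenger
    ]
    print(choose_maneuver(options).name)  # prints "swerve_to_median"

Under the opposite default, always protecting the passenger, the same function would simply minimize a different quantity. The philosophical weight lies entirely in which quantity the engineers chose, a point question 4 below takes up directly.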

What Is at Stake, and for Whom?

Before asking what the right answer is, it is worth mapping the moral landscape — identifying who has a stake in this situation, and what each stands to lose or gain beyond the immediate physical outcome.

Mara, the Passenger

She consented to ride in an autonomous vehicle. But did she consent to a specific calculus about the value of her life relative to others? Was she ever asked?

The Construction Workers

Their lives were assigned a weight in an equation they knew nothing about, by people they never met.

The Engineers

They made a consequential moral decision late one evening and embedded it in shipping code. Were they qualified to make it? Were they ever asked whether they wanted to?

The Manufacturer

A corporation that profited from the sale of this vehicle. Does liability follow profit? What did its terms of service actually say?

Regulators

Government agencies approved this vehicle for public roads. Did the approval process include any review of the ethical framework embedded in its decision logic?

The Public

Millions of people share roads with autonomous vehicles operating on undisclosed moral algorithms. Their exposure was never consented to, discussed, or voted upon.

Questions for Inquiry

The following questions are designed to be worked through in sequence. Each one opens a line of inquiry rather than closing one. There are no correct answers here — only more and less carefully reasoned ones.

  1. The engineers who wrote the collision algorithm made a moral decision — they determined, in effect, whose life the vehicle would prioritize. Should that decision have been made by engineers at all? If not them, then who?
     Consider: governments, ethicists, affected communities, the vehicle owners themselves, or some combination.
  2. Mara purchased and entered this vehicle voluntarily. Does her consent to ride in it constitute implicit consent to the moral choices embedded in its programming? What would genuine informed consent look like in this context — and is it even achievable?
     Consider what it would actually mean to read and understand a document disclosing the vehicle's ethical decision logic before purchasing.
  3. The construction workers did not consent to being parties in anyone's algorithmic calculus. Does the absence of their consent change the moral character of what happens to them — as compared to a scenario where a human driver made the same split-second decision?
     Consider whether the source of a harmful decision — human reflex vs. pre-programmed code — changes its moral character.
  4. Suppose the algorithm was programmed to always protect the maximum number of lives. In this case, it would sacrifice Mara to save six workers. Now suppose it was programmed to always protect the passenger. Which default seems more defensible — and who should get to choose?
     Consider whether a single universal default is even coherent, or whether different contexts demand different frameworks.
  5. A human driver who swerved and killed their own passenger to save six workers might be considered heroic — or might face criminal charges. How should law and moral judgment handle the same outcome when the agent is not a person but a program?
     Consider whether moral and legal responsibility require a conscious agent, and what happens to our frameworks when that agent is absent.
  6. Suppose autonomous vehicles prove statistically far safer than human drivers. If wide adoption would save tens of thousands of lives per year, does that aggregate benefit change the moral calculus in individual cases like this one?
     This question sits at the fault line between utilitarian and rights-based reasoning. Neither framework resolves it cleanly.
  7. Imagine that the algorithm's decision logic was published openly before the vehicle went to market. Would transparency change the moral status of the outcome? Does knowing the rules in advance constitute a form of social consent?
     Consider the difference between consent and mere notification. Consider also who, in practice, would actually read such a document.

A Complication Worth Sitting With

Research in behavioral ethics suggests that human drivers in genuine emergencies do not choose — they react. The moral credit or blame we assign to human decisions in crisis is complicated by the fact that those decisions are rarely deliberate. The autonomous vehicle, by contrast, is always deliberate in its outcomes, even if the deliberation happened years in advance.

This raises an uncomfortable question: are we holding the machine to a higher standard than we hold ourselves? And if so, is that the right standard — or does it simply reveal how poorly our moral frameworks were designed for agents that do not panic, do not freeze, and do not forget?

Through Different Lenses

The following perspectives do not resolve the dilemma. They illuminate different aspects of it.

Consequences and Welfare

A framework focused on outcomes asks which decision produces the best result for the greatest number. But it must also confront whose welfare counts, how to measure it, and whether aggregate benefit can justify individual sacrifice without that individual's consent.

Duties and Rights

A framework centered on rights asks whether any algorithm may treat a person purely as a means to an end without violating something fundamental about human dignity. It also asks whether the manufacturer assumed a duty of care toward the passenger that cannot simply be coded away.

Character and Integrity

A framework focused on character asks what kind of engineers, companies, and regulators we want to be — and what habits of moral reasoning we are cultivating when we outsource life-and-death decisions to software.

Care and Relationship

A framework grounded in care asks whether the abstraction required by algorithmic ethics — reducing people to variables — misses something morally essential about human vulnerability that cannot be captured by any formula.

Fairness and Agreement

A contractualist framework asks: would anyone, behind a veil of ignorance about whether they would be the passenger or a construction worker, agree to the algorithm as written?

Structural and Systemic

A broader lens asks who, in practice, can afford autonomous vehicles — and whose lives and neighborhoods are therefore most likely to become the data points in collision scenarios. Injustice may be embedded in the system long before the algorithm runs.

For Discussion or Written Reflection

The following prompts are designed for use in undergraduate or graduate seminars, or as written assignments.
