Consider an ordinary person. Not a criminal. Not a political dissident. Not a person under investigation for any articulable reason. Someone who goes to work, pays taxes, attends a place of worship, has opinions about politics, and lives a life that is, by any reasonable measure, entirely unremarkable from a law enforcement perspective.
Now consider what exists about this person in publicly available, legally accessible data. Their location history — inferred from their phone’s GPS and purchased from a data broker who aggregated it from a navigation app. Their browsing history — purchased from an internet service provider or reconstructed from ad-tracking data. Their purchase records — from retail loyalty programs and credit card transactions. Their social connections — inferred from call metadata, email headers, and social platform interactions, none of which required a warrant to obtain. Their health patterns — inferrable from pharmacy records, fitness app data, and insurance claims. Their political views — reconstructable from the publications they read, the events they attended, the donations they made, and the petitions they signed.
No single one of these data points is secret. Most were shared voluntarily, if without full awareness of the consequences. Each was obtained through channels that are, in the current legal framework, entirely legitimate. And yet — assembled by an AI system capable of finding patterns, drawing inferences, and constructing coherent profiles across disparate datasets — they yield something that feels profoundly like a violation: a comprehensive picture of who this person is, what they believe, who they associate with, and how they live their life, assembled without their knowledge and available to the state on demand.
At what point, if any, did something go wrong? And whose responsibility is it?
When every piece of data about a person was obtained legitimately, but their assembly produces something no person would have consented to — a comprehensive portrait of their inner and outer life — has a privacy violation occurred? And if so, how should we think about it?
The Mosaic in Practice
The thought experiment above is not hypothetical. It describes capabilities that exist now. Intelligence analysts have a name for this pattern: the mosaic effect — the phenomenon by which individually innocuous pieces of information combine to reveal something the subject would regard as deeply private. AI dramatically accelerates the assembly of the mosaic and extends it from specific persons of interest to entire populations.
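The mechanics of the mosaic can be sketched in a few lines of code. Everything below — the datasets, identifiers, and inference rule — is hypothetical and deliberately simplified; the point is only that a trivial join across individually innocuous records yields an inference that none of them contains alone:

```python
# Illustrative sketch of the mosaic effect. Each dataset is individually
# innocuous and (in this scenario) legally obtained; correlating them
# produces a sensitive inference. All records here are invented.
from collections import defaultdict

location_pings = [  # e.g. purchased from a navigation-app data broker
    ("p1", "2024-03-03", "14 Oak St"),
    ("p1", "2024-03-10", "14 Oak St"),
    ("p1", "2024-03-17", "14 Oak St"),
]
purchases = [  # e.g. from a retail loyalty program
    ("p1", "2024-03-04", "religious bookstore"),
]
donations = [  # e.g. from public campaign-finance filings
    ("p1", "2024-02-20", "Civil Liberties Fund"),
]

def build_profile(person_id):
    """Assemble one profile by correlating records across datasets."""
    profile = defaultdict(list)
    visits = [d for pid, d, addr in location_pings
              if pid == person_id and addr == "14 Oak St"]
    if len(visits) >= 3:
        # Recurring same-address visits: a pattern no single ping reveals.
        profile["inferred"].append("weekly attendance at 14 Oak St")
    profile["inferred"] += [f"purchase: {what}"
                            for pid, _, what in purchases if pid == person_id]
    profile["inferred"] += [f"donation: {org}"
                            for pid, _, org in donations if pid == person_id]
    return dict(profile)

profile = build_profile("p1")
```

The combined inference (regular attendance at a specific address, plus the purchase and donation records) suggests religious practice and political alignment, even though no individual record states either.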
The result is a comprehensive behavioral profile: where you go, what you believe, who you love, what you fear, what you’re planning, and what you might do next. Assembled automatically, at scale, without your knowledge, and without a warrant having been obtained for any of it.
What AI Adds: Scale, Speed, and the End of Practical Obscurity
The collection and analysis of information about citizens by governments is not new. Surveillance has existed as long as states have. What has historically limited it is not law alone, but practical constraint: the sheer labor involved in gathering, correlating, and analyzing information about many people simultaneously. This practical limitation — sometimes called practical obscurity — meant that even when individual data points were accessible, the comprehensive portrait that could be assembled from them was not. The cost of producing it was prohibitive except in cases where the investment was clearly warranted.
AI eliminates practical obscurity. What once required a team of analysts working for weeks can now be accomplished in seconds for any individual in a dataset — and the dataset can contain everyone. The moral transformation this represents is not merely a change in degree. It is a change in kind. A world in which comprehensive surveillance of any citizen is possible at negligible cost is a fundamentally different kind of world from one in which it requires substantial investment and therefore functions as a de facto deterrent against arbitrary use.
Anthropic named this explicitly in its public statement: under current law, the government can purchase detailed records of Americans’ movements, browsing history, and social associations from commercial sources without obtaining a warrant. Powerful AI makes it possible to assemble this scattered data into a comprehensive picture of any person’s life automatically and at massive scale. The law has not yet caught up with what the technology makes possible.
Contextual Integrity and the Privacy of Aggregation
The standard legal framework for privacy asks whether information was secret, and whether it was disclosed without consent. By this standard, the assembly of a comprehensive profile from publicly available data raises no privacy concern: none of the data was secret, and each piece was disclosed under terms the subject agreed to.
Philosopher Helen Nissenbaum offers a more useful framework: contextual integrity. Her argument is that privacy is not primarily about secrecy — it is about appropriate information flows. Information flows appropriately when they match the norms of the context in which the information was originally shared. Information flows inappropriately when it is used in ways that violate those contextual norms, even if no secret is disclosed and no explicit agreement is broken.
When you share your location with a navigation app, you are sharing it in a specific context — to receive directions — under norms that do not include government surveillance. When that data is purchased by an intelligence agency and combined with your browsing history, purchase records, and social connections to construct a behavioral profile, something has happened that violates the contextual integrity of every one of those original disclosures, even though each was individually legitimate. The violation is not in any single data point. It is in the use to which the aggregate is put.
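Contextual integrity lends itself to a semi-formal statement: a flow of information is appropriate when it matches the norms of its origin context, and inappropriate otherwise. The toy model below is a drastic simplification for illustration; the contexts, norms, and flows are all hypothetical:

```python
# Toy formalization of contextual integrity (after Nissenbaum):
# a flow is appropriate iff the norms of its origin context permit
# this recipient to receive this type of information.
from typing import NamedTuple

class Flow(NamedTuple):
    sender: str
    recipient: str
    info_type: str
    context: str  # the context in which the data was originally shared

# Norms: which (recipient, info_type) pairs each context permits.
NORMS = {
    "navigation": {("nav_app", "location")},
    "pharmacy": {("pharmacist", "prescription")},
}

def appropriate(flow: Flow) -> bool:
    """True when the flow matches a norm of its origin context."""
    return (flow.recipient, flow.info_type) in NORMS.get(flow.context, set())

# Sharing location with the navigation app matches that context's norms...
ok = appropriate(Flow("user", "nav_app", "location", "navigation"))
# ...but the same data flowing to an intelligence agency does not,
# even though nothing secret was disclosed at any step.
bad = appropriate(Flow("broker", "intel_agency", "location", "navigation"))
```

Note that the second flow fails the check not because the data became secret, but because the recipient and use fall outside the norms under which the data was originally shared — which is exactly the distinction the secrecy-based account cannot express.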
This framework captures what the secrecy-based account misses: the moral wrong in AI-powered mass surveillance is not that secrets were stolen. It is that a comprehensive picture of a person’s life was assembled — one whose creation they never consented to, which they never intended to make available, and whose existence fundamentally alters their relationship to the state — from pieces of information that were each shared in entirely different contexts for entirely different purposes.
The Chilling Effect and the Character of a Free Society
Mass surveillance does not only harm those who are surveilled. It harms everyone who knows it is possible. The chilling effect — the inhibition of lawful behavior, political expression, religious practice, and personal association that results from awareness of being watched — does not require that anything go wrong for any particular person. It operates as a structural condition on everyone’s freedom.
A person who knows that their attendance at a political meeting, their choice of reading material, their religious practice, and their social connections are all being continuously recorded and available for analysis by the state is not the same kind of free person as one who lives without that knowledge. The difference is not captured by whether the surveillance leads to any particular harm. It is a difference in the character of the society itself — in what kind of freedom is available to its members.
This is why the debate about mass surveillance cannot be resolved by pointing to safeguards, oversight mechanisms, or the good intentions of current administrators. The moral concern is not only what this administration will do with the capability. It is what the existence of the capability does to the condition of freedom for everyone, now and in the future, under any administration.
A Complication Worth Sitting With
The strongest response to this case study is consequentialist: what if mass surveillance, properly constrained and overseen, actually makes people safer? What if the patterns it can detect prevent terrorist attacks, protect vulnerable populations, or identify threats that would otherwise go unnoticed? If the benefits are real and substantial, does the privacy cost become acceptable — especially if the data was obtained legally and the analysis is subject to meaningful oversight?
This is a serious argument, not a bad-faith one. It deserves a serious response rather than dismissal. The deontological response is that some rights are not subject to cost-benefit analysis — that the freedom to live without comprehensive state surveillance of one’s inner and outer life is not a preference to be weighed against security outcomes but a condition of personhood and political liberty that cannot be traded away even for genuine benefits. The liberal political tradition has generally held that the burden of proof falls on those who would restrict fundamental freedoms, not on those who would preserve them.
But the consequentialist challenge does not disappear. Students should work through both positions with the seriousness they deserve, rather than assuming the answer is obvious in either direction.
What Is at Stake, and for Whom?
Ordinary Citizens
Everyone whose data exists in commercial and public databases — which is to say, effectively everyone. The person targeted by this system does not need to have done anything wrong to have their comprehensive profile assembled and available.
Activists and Dissidents
Those whose political activities, religious practices, or associations bring them into the orbit of state interest face a qualitatively different risk than ordinary citizens — one that the history of government surveillance suggests is neither hypothetical nor remote.
Journalists and Lawyers
Professions whose effective functioning depends on confidential communication and source protection face particular threats from a surveillance architecture that can reconstruct those connections from metadata without ever accessing content.
Data Brokers and Technology Companies
The commercial ecosystem that makes this surveillance possible — collecting, aggregating, and selling personal data — exists because it is profitable, not because it was designed as a surveillance infrastructure. But the infrastructure it creates is one.
Future Administrations
Surveillance capabilities built under one administration are inherited by the next. The question of what today’s government will do with these tools cannot be separated from the question of what any future government might do with them.
Democratic Norms
The relationship between state and citizen that defines liberal democracy assumes a zone of private life — thought, association, belief — that the state does not enter without specific justification. Mass surveillance eliminates that zone structurally.
Questions for Inquiry
- The data in our thought experiment was obtained legally, shared voluntarily (if without full awareness), and no single disclosure was a violation of any law or agreement. Has a privacy violation occurred? If so, at what point — and what triggered it? Consider whether the contextual integrity framework changes your answer compared to a secrecy-based account of privacy.
- Practical obscurity — the de facto privacy that results from the sheer cost of comprehensive surveillance — was never a right, never written into any law, and was always vulnerable to sufficient investment. Does its elimination by AI represent a new moral problem, or merely the removal of a practical limitation that was never morally significant in itself? Consider whether protections that exist only because they are expensive to defeat are morally meaningful, or whether what matters is the principle rather than the practical constraint.
- Anthropic argued that mass surveillance of Americans violates fundamental rights, even where it is currently legal, because the law has not yet caught up with what AI makes possible. Is this a coherent position — that something can be legal and nonetheless a rights violation? What theory of rights does it presuppose? Consider the relationship between legal rights, moral rights, and natural rights in the traditions covered in the Principled Moral Reasoning framework page.
- The chilling effect describes how mass surveillance restricts freedom not by punishing anyone directly but by changing the conditions under which everyone acts. Is an unfreedom that operates through self-censorship rather than external compulsion a genuine harm — or does the absence of direct coercion mean that no rights have been violated? Consider whether freedom requires not just the absence of legal prohibition but also the absence of conditions that make the exercise of freedom practically costly.
- A consequentialist argument for mass surveillance holds that the safety benefits may outweigh the privacy costs, especially if robust oversight is in place. Construct the strongest version of this argument. Then construct the strongest deontological response. What would each position have to concede to take the other seriously? Consider whether there is a version of this tradeoff that both frameworks could accept, or whether the disagreement is ultimately irresolvable within the terms of either framework alone.
- The Pentagon argued that the uses Anthropic prohibited are already illegal, making the contractual carveouts unnecessary. Legal scholars noted that the refusal to accept those carveouts raises the inference that the prohibited uses are intended. Evaluate the ethics of a party that claims it does not intend to do something but refuses to commit to not doing it. Consider what a refusal to make a commitment that costs nothing reveals about the nature of the underlying intention.
Through Different Lenses
A deontological framework asks whether persons have a right to a zone of private life that the state may not enter without specific justification — and whether that right is violated by comprehensive surveillance even in the absence of direct harm to any individual. It also asks whether using citizens’ data to profile them without their knowledge treats them as ends or merely as means.
A consequentialist framework must weigh genuine security benefits against the real costs of chilling effects on political expression, religious practice, and civil society — costs that are diffuse, hard to measure, and accrue to everyone rather than being concentrated in identifiable victims.
A care ethics framework attends to the asymmetry of power between state and citizen in surveillance relationships, and to the particular vulnerability of those — activists, journalists, immigrants, religious minorities — whose relationships with state power are already characterized by precarity rather than trust.
A structural lens asks how commercial data collection came to create the infrastructure for state surveillance as a byproduct of consumer capitalism, and who bears the costs of a system that was built to serve commercial interests but whose externalities fall on everyone.