Why Transparency Works and Why Authority Fights It

The story begins, as so many stories do these days, with something that seemed purely technical and ended somewhere much darker. Last week, Anthropic announced a connector for Claude, its artificial intelligence system, giving the technology structured access to more than half a million federally registered clinical trials. The announcement arrived with little fanfare in the specialized corners of LinkedIn where medical researchers and pharmaceutical strategists gather. Within hours, the comment thread had swelled to 140 responses, a constellation of cautious optimism, technical quibbling, and the occasional warning about data quality that anyone who has worked with government registries knows by heart.

Then, within days, immigration enforcement agents opened fire during operations in multiple cities. The shootings were not connected to artificial intelligence or clinical trials, at least not in any direct sense. And yet Scott Galloway, the marketing professor and podcaster who has built a following by saying uncomfortable things in uncomfortable ways, drew a line between them anyway. He argued that power does not respond to moral appeals, only to credible consequences. If you want to stop federal agents from shooting people, he suggested, stop using ChatGPT. The financial pain would radiate downstream to Meta, Nvidia, Microsoft. The message would be delivered not through protest but through spreadsheets.

The juxtaposition is jarring, which is precisely the point. One story is about making information more legible, more accessible, more subject to scrutiny. The other is about violence that persists because it remains largely invisible, insulated from accountability, operating in the shadows of discretion rather than the light of continuous review. They are separated by subject matter and tone, but they share a common architecture: a question about whether institutions are designed to explain themselves or to protect themselves from explanation.

What Galloway understands, and what the researchers responding to the clinical trials announcement seemed to understand instinctively, is that artificial intelligence becomes politically meaningful not when it predicts or persuades, but when it constrains. When it forces systems to show their work. When it transforms isolated incidents into visible trajectories. When it makes denial harder to sustain because the pattern is undeniable and the data is there for anyone to interrogate.

Medicine learned this lesson decades ago, not because it wanted to but because it had no choice. Clinical research was once governed almost entirely by professional discretion. Protocols were opaque. Failures were buried. Abuse flourished in the shadows of expertise. Public trust collapsed when those shadows were exposed, and the response was not to abolish expertise but to surround it with structure. Trial registries became mandatory. Independent review boards were established. Audit trails preserved history rather than erasing it. These reforms were contested every step of the way, dismissed as bureaucratic overreach, as interference with scientific judgment, as solutions in search of problems. They were imperfect then and remain imperfect now, but they transformed the system in one essential way. They made it legible.

The recent integration of AI with ClinicalTrials.gov does not represent a radical leap forward in intelligence. The data already existed. What changed is who can interrogate it, how quickly patterns can be surfaced, and how difficult it becomes to hide structural failures behind complexity. Crucially, the researchers reacting to this development were clear about its limits. AI can search, compare, and synthesize at a scale no human can match. It cannot be allowed to decide what is ethical, safe, or scientifically valid. Judgment remains human. Accountability remains human. The machine serves the process, not the other way around.

That distinction, between assistance and authority, is the entire point. And it is the distinction that public enforcement agencies have rejected almost entirely. Use-of-force data is fragmented, delayed, often inaccessible. Oversight is episodic rather than continuous. Reviews occur after harm, not before patterns escalate. External scrutiny is treated as interference rather than as a design requirement. This is not because the technology is unavailable. It is because visibility constrains discretion.

Once actions are recorded in real time, once patterns can be compared across jurisdictions, once explanations must be offered continuously rather than defensively, behavior changes upstream. Decisions slow down. Alternatives are considered. Risk is distributed more carefully. Institutions that rely on speed and authority resist these changes instinctively. They frame transparency as a threat rather than as a safeguard. In doing so, they guarantee that each crisis will look indistinguishable from the last.

Much of the current discourse around responsible AI avoids this confrontation. It treats ethics as a matter of intention rather than architecture. It asks institutions to police themselves while continuing to reward opacity. That approach fails every time. The researchers discussing AI in clinical trials understood this instinctively. They demanded guardrails. They rejected authorship without accountability. They recognized that assistance is not authority. Politics has not yet caught up to that level of institutional maturity.

If we are serious about reducing state violence, about restoring public trust, about governing power rather than reacting to it, then the path forward is neither mysterious nor utopian. It looks like structured transparency. Continuous oversight. Independent review triggered by data rather than outrage. Consequences that activate automatically rather than rhetorically. AI can make these systems feasible at scale. It cannot make them palatable to those who benefit from the status quo. That resistance is not a bug. It is a diagnostic.

The central lesson from medicine, from markets, and from democracy itself is painfully consistent. Systems improve when they are forced to see themselves clearly. Violence persists where systems remain invisible. Power abuses itself when no one is watching closely enough. Artificial intelligence will not save us from these truths. But used with purpose, it can remove our last remaining excuse. We already know how to build institutions that learn instead of lash out. The question is whether we are willing to demand that they do so.
