Trust, the Second Moat: When Information Gets Cheap, Trust Gets Expensive

There is a man named Raymond who does tile and fixture work out of a white van with his name and number on the door. He has been at it for twenty-two years. He answers his own phone. He is not the cheapest contractor in the county and he is not the most expensive, and he gets most of his work from word of mouth, which is a gentler way of saying he gets most of his work from trust. Last spring he was hired by a couple in the western suburbs to renovate their bathroom. They had been burned before. Their previous contractor had taken a deposit for half the job, installed a shower pan incorrectly, and stopped returning calls in November. They hired Raymond the way you approach a stray dog: carefully, ready to pull back.

He showed up at seven-thirty on the first morning, as he had said he would. He took off his boots at the door. He worked until three in the afternoon, as he had said he would. Before he left, he showed them what he had completed, what he had found behind the wall that he had not expected, and what it was going to cost. He answered their questions without looking at the door. He wrote the revised estimate and did not change it the next morning. He did this every day for two weeks. On the last day, as he was loading the van, the husband said to his wife, “I think we should call him about the kitchen.” She agreed. Not because the tile work was flawless. Because every small promise had been kept.

What we are watching in that driveway is not a man demonstrating virtue. We are watching a man building infrastructure. One kept promise at a time, one accurate estimate at a time, one question answered without deflecting, he has constructed something that is nearly impossible to replicate quickly and nearly impossible to destroy slowly. He has built a trust moat. And in a world that is filling rapidly with generated content, automated recommendations, and machine-made decisions, what Raymond has built is becoming one of the scarcest and most valuable things in American economic life.


Last week I introduced the first moat: judgment, the capacity to make and own decisions when information is incomplete, conflicting, or too contextual to be resolved by rules alone. The night charge nurse called rapid response despite the green score on the dashboard. The long-haul driver refused the route despite the optimizer. The teacher closed the adaptive learning platform and walked a child to the school counselor. These were acts of judgment. They were also something else.

The resident picked up the phone because the nurse told him to. He picked it up because he trusted her. Judgment produced the call. Trust made it land.

This is why trust is judgment’s nearest neighbor in the structure of the moats. Judgment is the act of deciding under uncertainty. Trust is the condition that determines whether anyone acts on what you decided. A nurse with perfect clinical judgment and zero institutional trust is a nurse whose calls go to voicemail. A contractor with perfect tilework and a reputation for disappearing gets no referrals. A public health agency with accurate data and a broken relationship with its communities gets ignored in exactly the moment it matters most. Judgment moves a person to act. Trust moves other people to follow.

We need a working definition, because the word has softened through overuse until it has become almost decorative. Trust, in the sense that matters here, is confidence under conditions of vulnerability. To trust is to make yourself open to harm, to allow another person, institution, or system to affect outcomes you care about, in the belief that they will act competently, honestly, and in ways you can predict.

Notice what trust is not. Trust is not belief. You can believe a claim is probably true without trusting the person who made it. Trust is not confidence, which is a probability assessment that can be computed without any relationship at all. Trust is not familiarity. You can know someone deeply and still not trust them. Trust is not compliance, which is behavioral conformity to rules or incentives and requires no trust whatsoever. Trust is not brand recognition, which is awareness, and awareness is not the same thing as trust. And trust is not blind faith, which is the absence of judgment. Blind faith is not a form of trust. It is a failure of it.

Trust always involves risk. You cannot trust what cannot disappoint you. This is why trust cannot be fully manufactured. A machine can process your medical history, but it cannot accept moral responsibility for what it recommends. A platform can learn your preferences, but it cannot be held accountable in the relational sense that makes accountability meaningful. A model can answer your question and answer it correctly, but it cannot be the kind of entity whose answer you trust with your life, because trust is not a function of accuracy. It is a function of relationship, history, and the willingness to be responsible for what happens next.


Here is the paradox that sits at the center of this essay, and it is as counterintuitive as the judgment paradox from last week. The better machines get, the more valuable trust becomes. Not despite the machines. Because of them.

The mechanism runs in three directions at once.

The first is abundance. When information was expensive to produce, the difficulty of production served as a rough filter on credibility. A published book, a broadcast report, a professional credential, a signed contract, all required enough investment that they conferred at least a minimum of legitimacy. That filter is gone. Content is now nearly free to produce, and the result is not mere abundance. It is chaos. In 2026, surveys found that nearly half of Americans say they no longer trust much of what they encounter online, and two-thirds report exhaustion from trying to verify the sources behind AI-generated content. When content is free, credibility becomes the scarce resource. When everything can be said, the question of who is worth listening to becomes the only question that matters.

The second mechanism is synthetic fluency. AI systems can now produce text, images, audio, and video that are functionally indistinguishable from human-created work. This generates what researchers have called the liar’s dividend: the ability to dismiss any authentic evidence as a deepfake. Studies of synthetic political video find that exposure to deepfakes makes people more uncertain about all news, not only the manipulated content. A spillover effect reduces trust in public information even when the content in question is entirely real. When anything can be faked, who you believe becomes the most consequential decision you make. Technology has not destroyed trust. It has raised its price.

The third mechanism is the accountability gap. As institutions delegate decisions to algorithms, the question of who is responsible for those decisions becomes urgent and, increasingly, unanswerable. The algorithm recommended the denial. The platform flagged the account. The risk engine scored the application. The routing system sent the driver into a neighborhood she knew was wrong. In each case, there is a decision and an outcome but no person who owns it. Principal-agent theory applied to AI governance names the three properties a legitimate delegation requires: assessability (can the decision be understood?), dependency (can the delegation be reversed?), and contestability (can the decision be challenged?). When AI systems operate without these properties, institutional trust and perceived control decline together, often sharply.

Add these mechanisms together and the picture is plain. AI massively increases the supply of information, analysis, recommendation, and generated content. Trust, the condition that determines whether people believe, use, share, or act on those outputs, does not increase automatically. It may decrease, under structural pressures we will get to shortly. The result is a price signal that any economist would recognize. Trust becomes expensive in exactly the proportion that information becomes cheap.


I want to spend most of the time here on the domains where this matters, because the existing conversation about trust and technology focuses almost exclusively on the concerns of people who are already trusted before they walk into a room. The moat belongs to everyone.

Consider the home health aide. She arrives at seven in the morning with a key. The family has given her that key not because she passed a background check, though she did, but because of something the background check cannot capture: the way she is with their mother, the way she uses their mother’s name instead of calling her ma’am, the way she calls on the days she is not scheduled, the way she remembered the name of a late husband after hearing it once. The family did not interview a service. They interviewed her. The trust is personal, particular, and irreplaceable. No AI scheduling platform, however sophisticated, can reassign it to a different provider without beginning again from zero. The moat is hers, and it took years to build.

Consider the pharmacist at an independent drugstore who has worked the same corner for eleven years. People in that neighborhood do not merely fill prescriptions there. They stop and ask what they should actually do. They describe symptoms they have not mentioned to their doctor. They bring in the pill bottles of an aging parent and ask for help making sense of what they are looking at. She remembers what they take. She remembers their grandchildren by name. They bring her these questions because they trust her, and they trust her because she has been there, consistent and honest, through enough of their lives to have earned it. A mail-order pharmacy can fill the same prescriptions at lower cost. It cannot do what she does. The moat is the relationship, and the relationship is eleven years.

Consider the teacher who has worked in the same district for twenty years, who knows not only the students but their older siblings, their parents, the family dynamics that show up in the classroom whether anyone mentions them or not. When she calls a parent, the parent answers differently than they would answer a stranger. When she says she is worried about a child, that statement carries a weight that no AI behavioral flag can replicate, because the flag represents a data pattern and she represents a relationship. The adaptive learning platform can surface the anomaly. Only she can know what it means.

Now move up the income scale, because the structure is identical even when the salaries are different. The physician navigating AI diagnostic tools with a patient who arrived frightened and skeptical has a clinical problem and a trust problem at the same time. Studies find that roughly half of patients would still choose a human physician over an AI for diagnosis and treatment. This is not irrationality. It is a reasonable assessment of what trust requires. Picture the oncologist sitting across from her patient with the scan on the screen between them. The model has produced its number. She has known this man for six years. She knows that he drove three hours to get here and that his wife died of the same disease. She looks at the scan, and she looks at him, and she says, here is what I know, here is what I do not, and here is what I think we should do next. The AI can be right more often than the physician on certain tasks. But the patient is not only trying to get a correct diagnosis. He is trying to navigate a frightening passage in the company of someone who will share the weight of the answer. That is a different transaction, and it requires a different kind of presence.

The attorney who has worked with a family through two generations of estate disputes has something that a legal AI tool does not have and cannot acquire: she has been present for the moments that matter. She has watched the family change. She has held confidences that cannot be shared. She has made judgment calls that turned out to be right, and a few that turned out to be wrong, and the family has watched how she handled both. That accumulated record is the trust moat. It does not transfer to a newer, faster system.


Trusted institutions work exactly like trusted individuals, only at scale and with the same structural requirements. The organizations with the widest trust moats are not the ones with the most sophisticated technology stacks or the most polished communications. They are the ones with the longest records of doing what they said they would do, correcting their errors openly, and treating the people who depend on them as people whose wellbeing matters.

The business case is not subtle. Research consistently finds that high-trust organizations outperform low-trust organizations by wide margins in total return to shareholders, in employee retention, in the speed of decision-making, and in recovery from crisis. The reason is not mysterious. Trust is what allows a society, a company, a hospital, or a team to move before every fact is independently verified. A system that must verify everything trusts nothing, and spends enormous resources on transaction costs that high-trust systems redirect toward productive work. Stephen M. R. Covey put it directly: distrust effectively doubles the cost of doing business. In the AI era, when the volume of automated interactions is scaling rapidly, that cost scales with it.

The failure modes run in two directions, and both matter.

Under-trust is lethal in ways that are easy to document but hard to repair. The Tuskegee syphilis study was not revealed until 1972. NBER research by Marcella Alsan and Marianne Wanamaker found that the revelations produced a deep and lasting mistrust of the medical system among Black men, with measurable reductions in care-seeking and a widening of the life expectancy gap that persisted for decades. Distrust did not kill those men directly. It kept them away from the institutions that might have helped, and it did so for reasons that were historically and rationally justified. They had learned the lesson at the cost of their fathers’ lives. The 2021 to 2023 data on COVID vaccine hesitancy shows the same structure. Mistrust in public health institutions was a stronger predictor of hesitancy than partisan political identity. The vaccines were safe. The institutions offering them were not trusted. Both facts were true simultaneously, and only one of them could be fixed quickly.

Over-trust is its own kind of failure. Theranos was not destroyed by bad technology alone. It was destroyed by misplaced trust: a board filled with influential figures who lacked the scientific expertise to evaluate the core claims, investors who deferred to Elizabeth Holmes’s vision rather than demanding independent verification, a culture in which trust in the founder displaced the obligation to verify. The governance failure was not a lack of information. It was an excess of uncalibrated trust in the absence of accountability. Enron followed the same architecture, and so have most of the major institutional collapses of the last thirty years. They did not fail because no one had doubts. They failed because the culture did not protect the people who had doubts from the people who had power.

Automation bias is the same failure applied to machines. Under time pressure and cognitive load, humans defer to automated systems even when their own judgment would have been more accurate. The EU AI Act explicitly recognizes this risk. The fraud-review floor I described last week is the anatomy of automation bias at scale: agents who see a pattern the model has missed, who have a supervisor who has told them to trust the tool and clear the queue, and who clear the queue. Everyone follows the process. Something goes catastrophically wrong. The two facts coexist because no one was empowered to say what they saw, or protected when they did.


What does a trusted person actually look like, at any level of work or income?

The most reliable signal is the willingness to admit the limits of what they know. Overclaiming is a trust-destroying behavior. The AI era has produced machines that hallucinate with fluency and confidence, that produce wrong answers in the cadence and register of correct ones. Against this background, the human or institution that says we do not know yet, or I was wrong about this, or the answer depends on things I cannot see from here, becomes distinctively credible. Calibrated honesty is not a soft virtue. It is a competitive asset.

The second signal is consistency under pressure. Trust is most tested, and most built, at moments of stress. The contractor who shows up the morning after a difficult conversation about unexpected costs. The physician who calls back with news that is not good. The supervisor who tells a worker the truth about a performance problem rather than letting it fester until the layoff. Leaders who maintain their stated values when it is costly to do so build the deepest credibility. Those who abandon their values under pressure do not simply lose trust in the moment. They retroactively undermine all trust that came before.

The third signal is transparent correction. Research on post-crisis trust repair finds consistently that organizations which acknowledge failure, investigate its causes, and make credible commitments to change recover more effectively than those that deny or minimize. This is not counterintuitive once you understand what trust actually is. Trust is not the absence of failure. Trust is the condition that allows people to believe that failure will be handled honestly. Organizations that handle failure honestly are confirming the trust that was placed in them. Organizations that cover up failure are spending it.

The fourth and least glamorous signal is reliability in small commitments. Large promises cost nothing. Raymond the contractor did not tell the couple his work would be perfect. He told them what time he would arrive and what it would cost, and he was right both times. Repeatedly. The 2026 Edelman Trust Barometer found something that is worth sitting with: even as trust in major institutions collapsed across its 28-country sample, trust in neighbors, family, friends, and immediate colleagues was growing. The explanation is not sentiment. It is proximity. Proximate relationships are where small-commitment reliability can actually be observed. When you cannot watch the large institutions, you watch the people you can see.


Trust is not distributed evenly. This is not an accident. It is a feature of social hierarchy, and it matters enormously to whether AI systems make things better or worse.

Research across multiple disciplines establishes a consistent pattern: high-status individuals receive trust before providing evidence, while lower-status individuals are required to provide evidence before receiving trust. In behavioral experiments, participants trusted high-socioeconomic-status partners significantly more, based on nothing more than appearance. The philosopher Miranda Fricker named the mechanism testimonial injustice: the systematic deflation of a speaker’s credibility owing to identity prejudice, the condition that leaves marginalized speakers unable to share what they know because their testimony is not received as credible. The cost of that asymmetry is borne, every day, by people who do good work without the benefit of the doubt that other people receive for free.

This matters for AI systems because AI systems trained on historical data inherit historical trust hierarchies. The COMPAS algorithm’s disparate false positive rates, 45% for Black defendants who did not subsequently reoffend versus 23% for white defendants who did not, are not a bug introduced by the algorithm. They reflect the patterns of policing, charging, and prosecution already embedded in the data the algorithm was trained to predict. When an AI system is deployed as a neutral tool, it often functions as an amplifier of existing credibility inequities. The machine does not invent the injustice. It scales it.

This is the trust question that most business conversations about AI decline to ask directly: who, in your system, is trusted before they produce evidence, and who must produce evidence before they are trusted? And what does your AI system do to that asymmetry? Make it smaller, or larger, or simply faster?

The most trusted institutions in the AI era will be the ones that can answer that question honestly, and that have built systems designed to correct for it rather than exploit it. The least trusted will be the ones that adopted the language of algorithmic fairness while automating the outcomes they already preferred.


Several structural forces are pushing institutions to consume, usually in the name of efficiency, the very trust they depend on.

Workplace surveillance is the most visible. Research on electronic monitoring finds that it has positive effects on narrow task performance metrics but damages organizational trust. Studies of warehouse workers found that excessive monitoring increases stress, lowers morale, and shifts workers’ focus from doing good work to avoiding punishment. Only about half of employees report trusting their companies, and the surveillance-driven management model that has expanded with AI tools pushes that number lower. The insight is simple. Where a trusted professional is given discretion, a surveilled worker is monitored for compliance. The signal sent by surveillance is that the worker cannot be trusted. The worker receives that signal and acts accordingly. The institution gets exactly the relationship it built.

Speed without verification is another form of trust consumption. The pressure to publish, deploy, and scale before adequate testing is a systematic mechanism for spending trust that took years to accumulate. When speed is the primary metric and trust is the implicit resource being spent to buy time, the result is a predictable sequence: rapid adoption, hidden failures, public exposure, collapse of confidence. AI-driven content platforms, algorithmic credit scoring systems, early medical AI tools, and automated public benefits systems have all followed this pattern in recent years. The efficiency gain was real and short. The trust loss has been real and long.

Institutional opacity does the same work more quietly. When institutions conceal their decision-making processes, whether to protect proprietary methods, limit liability, or avoid accountability, they create the conditions for rational distrust. People who cannot understand how a decision affecting them was made cannot evaluate whether it was fair. People who cannot evaluate fairness cannot meaningfully consent. And people who cannot consent become, over time, people who do not comply.

The core tradeoff is not subtle. Systems designed only for efficiency often consume the trust they depend on. Optimizing a transaction is not the same as cultivating a relationship. Organizations that conflate the two eventually find they have traded a durable asset for a temporary gain.


The age of AI will not eliminate trust. The evidence points firmly in the other direction. It will make trust one of the few remaining forms of durable advantage, for individuals, for institutions, for communities, and for the civic structures that hold a democratic society together.

For individuals, the implication is specific. The capacities that will matter most are not the ones easily listed on an application. They are the habits that build observable records over time: showing up when you said you would, saying what you do not know, correcting mistakes before someone else finds them, treating the people who depend on you as people whose outcomes matter to you. These are equally available to the unit secretary and the surgeon, the mechanic and the CEO. They do not require a credential. They require a consistent decision, repeated across enough time to become something people can rely on.

For institutions, the implication is harder. Most of what we have built rewards the behaviors that consume trust: surveillance over relationship, speed over verification, opacity over accountability, conformity over judgment. The institutions that thrive in the AI era will be the ones that treat trust as capital, which means managing it the way capital is managed: carefully, with awareness of how it accumulates and how it depletes, with genuine alarm when the reserves are running low.

Renee, the unit secretary from last week, is the protagonist here as she was there. She has nineteen years of demonstrated reliability in a role that the scheduling platform is now beginning to circle. What she has built, which the platform does not have and cannot build quickly, is the trust of every physician, family member, and patient who has ever needed something from that floor and found her there. She knows the residents by their first names and by the year they arrived. She keeps a tin of butterscotch in the bottom drawer for the families who have been sitting too long in the waiting room. She knows who to call when the system produces a result that will hurt someone. She knows who can be reached at what hours, which conversations to have in the hallway rather than over the phone, which families are fragile this week. None of this is in the data. All of it is decisive.

The machine can answer the question. Trust decides whether anyone acts on the answer. Raymond knows this. So does Renee. The unit secretary and the tile man, working their different floors, are practicing the same discipline: the daily accumulation of evidence that they are worth relying on.

That evidence is the moat. And in a world that can manufacture nearly everything else, it is one of the things that still cannot be faked.

Next up: Craft, the commitment to doing something well beyond what is required, and why it is one of the few things that gets more valuable the more machines raise the floor.
