Designing for Trust: Why Explainability Alone Falls Short in AI

Trust in AI is not built solely through explanations. It is earned through consistency, relevance, and ethical design. Explainability helps, but trustworthiness must be designed system-wide, not assumed.

Explainability alone won’t carry the weight—designing for trust requires intention at every touchpoint.

As artificial intelligence becomes deeply embedded in the systems and decisions that shape modern life, one belief has gained widespread acceptance: that increasing explainability in AI systems inherently increases user trust. On the surface, this feels intuitive. After all, do we not trust what we understand?

But recent research, including a meta-analysis of 90 studies published in IEEE Transactions on Technology and Society (Wang et al., 2023)¹, challenges this assumption. While the analysis confirms a statistically significant link between explainability and trust, the effect size is weak (r = 0.194). In other words, explainability is only one of many factors that influence user trust, and it is far from the most decisive.

As someone leading product design at the intersection of real-time analytics and AI, I believe this insight should reshape our thinking about trust, design, and the role of AI in society. For enterprise products where users depend on high-stakes, data-informed decisions, designing for trust is not about surface-level clarity. It is about structural integrity, ethical intent, and alignment with human values.

The Limits of Explainability

There are certainly contexts in which explainability significantly improves trust, especially when it is tailored to the user's level of expertise, emotional state, and risk exposure. In high-stakes domains like healthcare and finance, even a well-crafted, non-technical rationale can instill meaningful confidence. The key is recognizing that not all explanations are equal, and not all users benefit from the same kind of explanation.

Despite this, much of the excitement around explainable AI (XAI) has focused on the simplistic idea that showing your work automatically builds trust. But the reality is more nuanced. As the research shows, explanation does not guarantee understanding, and understanding does not always translate to trust.

Consider this: people trust elevators daily without knowing how counterweights, traction systems, or safety governors work. What earns their trust is consistent performance, reliability, and the absence of surprises. Likewise, users often trust AI systems not because they understand how they work, but because the systems behave predictably and align with their existing knowledge and expectations. In contrast, explanations that are overly technical or poorly designed can confuse users, undermine trust, or even give a misleading sense of confidence.

Trust is not a function of information volume—it is a function of relevance, clarity, and alignment with user values. Too often, explanations are designed to expose internal workings, rather than meet the needs of the person relying on the output. This misalignment helps explain why explainability alone, even when statistically significant, has a limited impact on user trust.

The study highlights a critical distinction between Trust Empowerment and Trust Enforcement. The former respects the user’s agency by offering meaningful context that enables choice. The latter manipulates trust by selectively revealing information to instill compliance. Too often, design defaults toward the latter, especially in opaque enterprise tools and consumer-facing algorithms.

This pattern appears across industries. In fintech, model outputs are often framed as certainties, even when data quality is inconsistent or historically biased. In healthcare, a model might provide a rationale for a diagnosis, but if that rationale is steeped in technical jargon, it may fail to build confidence, or worse, obscure limitations. These failures are not theoretical; they are already playing out in production systems.

IBM Watson for Oncology was promoted as an AI system that could recommend cancer treatments. However, internal audits and partner reports revealed that Watson sometimes offered unsafe or clinically unsupported recommendations. When oncologists sought to understand the rationale, the system offered explanations that were either too generic or too technical to be useful. Instead of increasing trust, these explanations created confusion and skepticism—ultimately leading several hospitals to abandon the platform.

Trustworthiness vs. Trust

This distinction is central for designers. Trust is subjective. Trustworthiness is structural. We can influence trust, but our responsibility is to design for trustworthiness. That means:

  • Delivering predictable and repeatable system behavior
  • Being transparent about limitations and uncertainty, not just decisions (a brief sketch of this follows the list)
  • Enabling contestability and user feedback mechanisms
  • Accounting for ethical, social, and cultural contexts, not just UI flows
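
To make the second point concrete, here is a minimal sketch, in Python with purely hypothetical names, of what surfacing limitations and uncertainty alongside a decision can look like at the data-structure level. It illustrates the principle under assumed conventions; it is not a prescribed implementation.

```python
from dataclasses import dataclass, field

# Hypothetical structure for pairing a model decision with its uncertainty
# and known limitations, so the caveats travel with the recommendation
# instead of being buried in documentation. All names are illustrative.
@dataclass
class ModelDecision:
    label: str                               # recommendation shown to the user
    confidence: float                        # calibrated probability, 0.0 to 1.0
    interval: tuple[float, float]            # e.g., a 95% plausible range
    limitations: list[str] = field(default_factory=list)  # plain-language caveats

def present(decision: ModelDecision) -> str:
    """Render the decision for a non-technical user: outcome first,
    uncertainty and caveats alongside it rather than hidden in a tooltip."""
    lines = [
        f"Recommendation: {decision.label}",
        f"Confidence: {decision.confidence:.0%} "
        f"(plausible range {decision.interval[0]:.0%} to {decision.interval[1]:.0%})",
    ]
    lines += [f"Note: {note}" for note in decision.limitations]
    return "\n".join(lines)

if __name__ == "__main__":
    decision = ModelDecision(
        label="Flag transaction for manual review",
        confidence=0.72,
        interval=(0.61, 0.81),
        limitations=["Trained on pre-2024 data; newer fraud patterns may be missed."],
    )
    print(present(decision))
```

The point is structural: the uncertainty and the caveats are part of the decision object itself, so a downstream interface cannot display the recommendation without them.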

Importantly, trustworthiness is not the responsibility of a single feature or screen. It is the cumulative outcome of every interaction a user has with the product—and with the brand. A trustworthy model output means little if it is delivered through a confusing onboarding experience or followed by an unresponsive support team.

Designers must assess how trust is built or broken across the entire user journey. This includes visual design, copy tone, help documentation, customer service policies, privacy settings, and recovery mechanisms. Every detail matters. Every detail communicates values.

Trustworthy systems may not earn trust immediately. But they are designed to deserve it—and that matters. Just as we measure usability, performance, or accessibility, we can and should measure trustworthiness as well. Metrics such as system uptime, audit traceability, bias detection, explanation efficacy, and adherence to ethical frameworks help us quantify progress and spot gaps. Without metrics, trust becomes a matter of hope rather than a matter of intent.
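
As one illustration of what that measurement might look like in practice, the sketch below treats each trustworthiness signal as something with an explicit target and a visible gap. The metric names, values, and thresholds are hypothetical, not a standard.

```python
# A minimal sketch of tracking trustworthiness signals against explicit targets.
# Metric names, values, and thresholds are hypothetical placeholders.
TRUST_METRICS = {
    # name: (current value, target, higher_is_better)
    "system_uptime_pct": (99.2, 99.9, True),
    "decisions_with_audit_trail_pct": (87.0, 100.0, True),
    "bias_checks_failed_last_quarter": (3, 0, False),
    "explanations_rated_helpful_pct": (61.0, 75.0, True),
}

def trust_gaps(metrics: dict) -> list[str]:
    """List the metrics that miss their target, so gaps in trustworthiness
    are visible to the team instead of being assumed away."""
    gaps = []
    for name, (value, target, higher_is_better) in metrics.items():
        missed = value < target if higher_is_better else value > target
        if missed:
            gaps.append(f"{name}: {value} (target: {target})")
    return gaps

if __name__ == "__main__":
    for gap in trust_gaps(TRUST_METRICS):
        print("Trust gap:", gap)
```

Which signals belong in that dictionary will vary by product; what matters is that each one has an owner, a target, and a gap the team can see.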

The best AI products today do not simply ask for trust—they earn it. They embrace ambiguity. They acknowledge limitations. They expose trade-offs. They invite feedback. Most importantly, they make users feel seen.

What Designers and Product Leaders Should Do

If explainability alone is not sufficient, then what is our responsibility? Here are four shifts that product and design leaders should champion:

  1. Design for accountability, not just transparency. Build systems that users can question. Make it easy to escalate concerns or seek human oversight. Trust grows when users know they have agency.
  2. Prioritize actionable understanding over technical completeness. Focus on outcomes. Can the user make a confident decision? Can they anticipate what the system will do next?
  3. Choose interpretability over polish when it matters. Favor inherently understandable models in high-stakes domains, even if they are simpler than opaque alternatives. A glossy explanation for an opaque model does not make it trustworthy.
  4. Honor user context, identity, and emotion. UX does not live in isolation. It must take into account the user’s emotional state, professional pressures, and lived experiences. What reassures one persona may alienate another.

These shifts require cross-functional alignment. Product, design, engineering, marketing, and even legal must work together. They also benefit from proactive user education. Helping users build realistic mental models of AI through marketing messaging, onboarding, in-product demo guidance, or scenario-based walkthroughs can calibrate expectations and build confidence. Trust is not just built by the product—it is co-developed with the user.

Beyond Design: The Organizational Imperative

Explainability is often framed as a UX concern, but it is also a governance challenge. Organizations must operationalize trust through transparency, accountability, and process. Teams should be asking:

  • Who determines which explanations are surfaced—and why?
  • What mechanisms exist when users reject or question outputs?
  • How is uncertainty conveyed in urgent or irreversible decisions?
  • Where does automation end, and human responsibility begin?

These questions grow more urgent as generative models and autonomous systems enter critical domains. Explainability cannot bear the ethical weight of AI alone. Without thoughtful governance and intentional culture, we risk building persuasive but unprincipled systems.

And trust is not uniform. It is shaped by culture, regulation, and lived experience. What reassures a U.S.-based product manager may confuse or alienate a Japanese hospital administrator or an Indian government official. Localization is not just about language—it is about aligning expectations, values, and social norms. Designing for trust means designing for context.

Final Thought: Designing for Informed Trust

We must move beyond the equation that "transparency equals trust" and embrace a richer understanding: trust is co-created. It is a relationship. It is dynamic. It is earned, sometimes lost, and always contextual.

Designing for AI or integrating AI into existing platforms is no longer just about usability. It is about accountability, alignment, and impact. It is about designing systems that deserve to be trusted, not merely systems that feel persuasive. The research on explainability provides a vital lesson: trust cannot be manufactured with a clever tooltip or a confidence score. It must be earned through principled behavior, honest communication, and deeply intentional design.

As we build the next generation of AI products—from finance to healthcare to public infrastructure—we must rethink trust not as a checkbox or KPI, but as a guiding principle. That begins by recognizing that explainability is necessary, but not sufficient.

As designers, we do not get to decide whether users trust our systems. But we are responsible for ensuring those systems are worthy of it.


¹Wang, D., Yang, Q., Abdul, A., & Lim, B. Y. (2023). A Meta-Analysis on the Relationship Between Explainability and Trust in AI. IEEE Transactions on Technology and Society, 4(2), 97–113. https://ieeexplore.ieee.org/document/10964393
