Trust calibration is the principle that users should trust an automated system when it is likely to be correct and question it when it might be wrong. The goal is not to maximise trust (which produces automation bias and unchecked acceptance) or minimise it (which produces under-reliance on genuinely helpful automation) but to match trust to actual system reliability.
Calibrated trust requires users to:
- Understand the system's strengths and weaknesses
- Recognise when it is operating within its competence
- Detect when it may be wrong
- Override it appropriately when conditions demand
Interface design supports trust calibration by:
- Communicating uncertainty — showing confidence scores or probability ranges
- Providing evidence — showing data or reasoning behind recommendations
- Highlighting disagreement — flagging cases where the system conflicts with guidelines or user input
- Tracking accuracy — showing historical performance so users can develop informed expectations
- Explaining limitations — making clear what the system cannot do
- Distinguishing deterministic from non-deterministic components — so users know which elements will behave consistently
The concept is central to the design of AI interfaces, clinical decision support, automated trading systems, and autonomous vehicles. It is a response to the twin failures of automation bias (over-trust) and algorithm aversion (under-trust), both of which lead to poor outcomes.
Trust calibration is arguably the most important usability principle for AI interfaces, where the user must continually decide whether to accept, modify, or override the system's suggestions.
Related terms: Automation Bias, Clinical Decision Support, Explainability
Discussed in:
- Chapter 19: AI and Usability — The Usability of AI Systems
Also defined in: Textbook of Usability