Glossary

Automation Bias

Automation bias is the cognitive tendency to accept recommendations from automated systems (computers, algorithms, AI) without sufficient scrutiny, even when those recommendations are incorrect. It is the over-trust failure mode of human-automation interaction, extensively documented in aviation (pilots accepting incorrect autopilot behaviour) and in healthcare (clinicians accepting incorrect dose recommendations from computerized provider order entry, or CPOE, systems).

The mechanism combines several factors:

  • Cognitive offloading — once a task is automated, users disengage from detailed monitoring
  • Authority bias — automated systems are perceived as more objective and competent
  • Effort reduction — questioning the automation requires extra effort users are reluctant to expend
  • Workload pressure — in busy contexts, accepting automation is faster than verifying it

Automation bias and over-reliance on automation have been implicated in accidents including the Air France 447 disaster (where a crew accustomed to automation mishandled the aircraft after unreliable airspeed readings caused the autopilot to disconnect) and in medication errors traced to incorrect CPOE recommendations.

Design mitigations:

  • Require active confirmation rather than passive acceptance
  • Present data alongside the recommendation so users evaluate both
  • Occasionally test without a recommendation to maintain user skill
  • Display confidence levels and flag uncertain cases
  • Track override patterns to identify systematic failures
  • Design for calibrated trust, not maximised trust
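Several of these mitigations (active confirmation, flagging uncertain cases, tracking override patterns) can be sketched as a minimal interface model. This is an illustrative sketch only; all names here (Recommendation, DecisionSupportUI, uncertainty_threshold) are hypothetical and not drawn from any real CPOE or decision-support product:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    suggested_dose_mg: float   # the system's suggestion
    confidence: float          # model confidence in [0, 1]

@dataclass
class DecisionSupportUI:
    """Sketch of an interface that calibrates trust rather than maximising it."""
    uncertainty_threshold: float = 0.7
    overrides: list = field(default_factory=list)

    def present(self, rec: Recommendation) -> dict:
        # Show the confidence alongside the recommendation, flag
        # low-confidence cases, and never allow passive acceptance.
        return {
            "suggestion": rec.suggested_dose_mg,
            "confidence": rec.confidence,
            "flagged_uncertain": rec.confidence < self.uncertainty_threshold,
            "requires_active_confirmation": True,
        }

    def record_decision(self, rec: Recommendation, accepted: bool) -> None:
        # Log overrides so systematic failure patterns can be audited later.
        if not accepted:
            self.overrides.append(rec)

    def override_rate(self, total_decisions: int) -> float:
        return len(self.overrides) / total_decisions if total_decisions else 0.0
```

A usage example: presenting `Recommendation(500.0, 0.55)` with the default threshold yields `flagged_uncertain == True`, prompting the clinician to evaluate the underlying data rather than accept by default, while the override log supports later review of where the system is systematically wrong.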

As AI systems become more common in clinical and safety-critical contexts, automation bias becomes an increasingly important usability concern. The goal of AI interface design is not to maximise trust but to calibrate it — so users trust the system when it's likely right and question it when it might be wrong.

Related terms: Trust Calibration, Clinical Decision Support, Alert Fatigue, Cognitive Bias

Also defined in: Textbook of Usability, Textbook of Medical AI