Glossary

AI Usability

AI usability refers to the design of interfaces to systems powered by artificial intelligence, including large language models, recommendation systems, computer vision tools, and autonomous agents. These systems introduce interaction challenges that traditional usability principles only partially address.

Key challenges unique to AI systems:

  • Non-determinism — the same input may produce different outputs, undermining stable mental models
  • Unclear capabilities — users must guess what the system can do (discoverability problem)
  • Opaque reasoning — outputs arrive without transparent justification (explainability problem)
  • Over-trust and under-trust — users struggle to calibrate appropriate reliance (trust calibration problem)
  • Automation bias — tendency to accept AI recommendations without scrutiny
  • Error recovery — undo is harder when the user doesn't fully understand what was done
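The non-determinism challenge above can be illustrated with a toy sketch. All names here (`toy_model`, the canned completions) are hypothetical stand-ins: a real model samples from a learned distribution, not a fixed list, but the usability effect is the same.

```python
import random

def toy_model(prompt: str, rng: random.Random) -> str:
    """Stand-in for a sampling-based model: the prompt alone does not
    uniquely determine the output."""
    completions = ["Draft an outline", "Summarise the text", "List key points"]
    return rng.choice(completions)

# Two unseeded calls on the same prompt may disagree, which is what
# undermines the user's mental model of "same input, same output".
out_a = toy_model("help me start", random.Random())
out_b = toy_model("help me start", random.Random())

# Pinning the seed restores reproducibility, a common mitigation in
# testing and demos, though rarely available to end users of a chat UI.
assert toy_model("help me start", random.Random(0)) == \
       toy_model("help me start", random.Random(0))
```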

Design principles for AI interfaces (selected from the 18 guidelines in Amershi et al., 2019, "Guidelines for Human-AI Interaction"):

  • Make clear what the system can do initially
  • Make clear how well the system can do it to set realistic expectations
  • Time services based on context — intervene when appropriate
  • Show contextually relevant information
  • Support efficient invocation, dismissal, and correction
  • Scope services when in doubt
  • Learn from user behaviour over time
  • Update and adapt cautiously
  • Encourage granular feedback
  • Convey the consequences of user actions

Existing usability principles (Nielsen's heuristics, Norman's principles) still apply but require adaptation. Visibility of system status becomes showing what the AI is doing and how confident it is. User control becomes the ability to stop, redirect, and undo AI actions. Error prevention becomes guardrails against harmful outputs.

Related terms: Automation Bias, Trust Calibration, Explainability, Nielsen's 10 Heuristics

Also defined in: Textbook of Usability