Learning Objectives
- Define usability and distinguish it from related concepts such as user experience
- Explain why usability matters in safety-critical domains
- Describe the ISO 9241 framework for usability
- Outline the structure and scope of this textbook
- Understand the case for a scientific approach to design
Introduction
Every designed object — a door handle, a cockpit instrument panel, a hospital information system, a city street — mediates between a human being and a goal. When the mediation is seamless we barely notice it; when it fails, the consequences range from mild frustration to serious harm. Usability is the discipline concerned with understanding and improving the quality of that mediation. This textbook presents usability as an applied science. Its foundations lie in experimental psychology, human factors engineering, and architecture, fields that have spent decades measuring human perception, cognition, and motor performance. From those measurements a set of quantitative laws and predictive models has emerged — Fitts's Law for pointing movements (Fitts, 1954), Hick's Law for choice reaction time (Hick, 1952), Miller's capacity limits for working memory (Miller, 1956) — that allow designers to evaluate and compare designs without relying solely on intuition or post-hoc testing (Card et al., 1983). Alongside these scientific models, centuries of evolved design practice in architecture, aviation, and other industries provide tested heuristics that complement the experimental evidence (Norman, 2016; Alexander, 1977).
Defining Usability
The International Organization for Standardization defines usability as the "extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO 9241-11, 2018). Three components are central to this definition (Nielsen, 1993). Effectiveness is the accuracy and completeness with which users achieve their goals. A medication ordering system is effective if the correct drug, dose, and route reach the patient; a wayfinding system is effective if visitors reach their destination without getting lost. Efficiency concerns the resources expended in relation to the results achieved. Time is the most common measure — how long does it take a nurse to document a set of vital signs? — but cognitive effort, physical effort, and error recovery costs are equally relevant. Satisfaction captures the user's subjective response: comfort, trust, and willingness to use the system again. A technically efficient interface that leaves users anxious or confused has a usability problem, even if task completion rates are high.
Usability is not a single property but a relationship between a user, a task, and a context. The same interface may be highly usable for an expert performing a routine task and unusable for a novice encountering it for the first time. Evaluation must always specify who the users are, what they are trying to do, and under what conditions.
Why Usability Matters
Poor usability is not merely an inconvenience. In healthcare, confusing drug selection interfaces have contributed to medication errors. In aviation, cockpit display designs that exceeded pilots' cognitive capacity played a role in accidents. In architecture, buildings that disorient their occupants create stress and reduce productivity.
Between 1985 and 1987, at least six patients receiving treatment from the Therac-25 radiation therapy machine suffered massive radiation overdoses, in some cases estimated at up to a hundred times the intended dose (Leveson & Turner, 1993). At least three died as a direct result of their injuries; others suffered severe burns and permanent disability. The accidents were not caused by a single faulty component but by an interaction between two design decisions. A software race condition, triggered by an operator editing parameters quickly at the terminal, could leave the machine configured to deliver a high-energy electron beam without the protective beam-spreading target in place. In earlier models — the Therac-6 and Therac-20 — a hardware interlock would have physically prevented the beam from firing under those conditions. For the Therac-25, the manufacturer had removed the hardware interlock and relied entirely on software to enforce safety. When the software failed, nothing else caught it; worse, the operator console displayed only a cryptic "Malfunction 54" error that gave the user no indication that a lethal dose had just been delivered. The Therac-25 is the single most-cited case study in safety-critical software engineering, and it will recur throughout this textbook as a touchstone for the argument that interface design, error reporting, and hazard analysis are not separate concerns from "real" engineering — they are at the heart of what makes a system safe.
A Scientific Approach to Design
This textbook argues that design need not be solely a matter of taste, intuition, or iterative trial and error. The human body and brain have measurable properties — the speed of visual processing, the capacity of working memory, the time required to move a hand to a target — and these properties constrain what designs will and will not work.
Fitts's Law (Fitts, 1954) states that the time required to move to a target is a logarithmic function of the distance to the target divided by the target's width. This law, derived from information theory and confirmed by decades of replication, provides a quantitative basis for sizing and positioning interactive elements (MacKenzie, 1992).
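The law's predictive use can be sketched in a few lines of code using the Shannon formulation popularised by MacKenzie, MT = a + b · log2(D/W + 1). The constants a and b below are illustrative placeholders, not measured values; in practice they are fitted by regression to movement times recorded for a particular device and user population.

```python
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.2, b: float = 0.1) -> float:
    """Predicted movement time in seconds for acquiring a target.

    Uses the Shannon formulation of Fitts's Law:
        MT = a + b * log2(D / W + 1)
    where D is the distance to the target centre and W is the target
    width along the axis of motion. The defaults for a (intercept) and
    b (slope, seconds per bit) are illustrative placeholders only.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Doubling a button's width lowers the index of difficulty,
# and with it the predicted acquisition time:
small_button = fitts_movement_time(distance=400, width=20)  # ID ≈ 4.39 bits
large_button = fitts_movement_time(distance=400, width=40)  # ID ≈ 3.46 bits
```

The practical implication is the one designers rediscover constantly: making a target larger, or moving it closer to where the pointer already is, produces a predictable and measurable reduction in acquisition time, which is why frequently used controls belong near the action and at generous sizes.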
Structure of This Textbook
- Part I: Foundations (Chapters 2–5) covers the human systems that constrain design: perception, memory, attention, and motor control. Each chapter introduces the relevant psychology and connects it to design implications.
- Part II: Laws and Models (Chapters 6–10) presents the quantitative models and heuristic frameworks that translate human science into design guidance, from the Model Human Processor and GOMS to design heuristics and the laws of architecture and aviation.
- Part III: Domains of Application (Chapters 11–14) examines usability in specific contexts — software, healthcare systems, the built environment, and data visualisation — where the general principles take on domain-specific forms.
- Part IV: Methods (Chapters 15–18) covers the practical toolkit for evaluating usability: testing with users, expert review, predictive modelling, and experimental design.
- Part V: The Future (Chapters 19–20) considers how artificial intelligence is changing both the practice of design and the evaluation of usability, and makes the case for treating design as a cumulative, evidence-based science.
Key Takeaways
- Usability is defined by the ISO 9241 triad of effectiveness, efficiency, and satisfaction, always relative to specific users, tasks, and contexts.
- Poor usability has measurable costs in safety, productivity, and trust.
- Human perception, cognition, and motor performance have quantifiable properties that constrain design.
- Scientific laws (Fitts's Law, Hick's Law) and evolved design practices (architectural proportions) provide complementary sources of evidence for design decisions.
- This textbook presents usability as an applied science, integrating experimental evidence with centuries of design practice.
Further Reading
- Norman, D. A. (2013). The Design of Everyday Things (revised and expanded edition). Basic Books.
- ISO 9241-11:2018. Ergonomics of human-system interaction — Part 11: Usability: Definitions and concepts. International Organization for Standardization.
- Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates.
- Nielsen, J. (1993). Usability Engineering. Academic Press.
- Leveson, N. G., & Turner, C. S. (1993). An investigation of the Therac-25 accidents. Computer, 26(7), 18–41.