GOMS is a family of methods for analysing and predicting the time expert users take to perform tasks. Developed by Card, Moran, and Newell alongside the Model Human Processor, GOMS stands for:
- Goals: the user's objectives at various levels of abstraction
- Operators: atomic actions the user performs (keystrokes, mouse movements, mental preparations)
- Methods: sequences of operators that accomplish goals
- Selection rules: criteria for choosing among alternative methods
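The four components above can be made concrete with a small sketch. All names here are hypothetical, chosen for a "delete a word" goal in a generic text editor; operators use KLM-style codes (M = mental preparation, K = keystroke, P = point, B = button press):

```python
# A minimal, illustrative GOMS model (all names hypothetical):
# one goal, two alternative methods, and a selection rule.
goms_model = {
    "goal": "delete-word",
    "methods": {
        # Methods are sequences of operators.
        "keyboard-method": ["M", "K", "K"],         # e.g. a two-key shortcut
        "mouse-method": ["M", "P", "B", "B", "K"],  # double-click word, press Delete
    },
    # Selection rule: which method applies in which context.
    "selection_rule": lambda hands_on_keyboard: (
        "keyboard-method" if hands_on_keyboard else "mouse-method"
    ),
}

print(goms_model["selection_rule"](True))   # prints "keyboard-method"
print(goms_model["selection_rule"](False))  # prints "mouse-method"
```

The point of the structure is that a goal can be served by several methods, and the selection rule captures the expert's context-dependent choice among them.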
GOMS analyses model expert, error-free performance. They predict how long a task takes when the user knows what to do and does it correctly. They do not model learning, errors, or exploratory behaviour.
Several variants exist, differing in detail and effort required:
- Keystroke-Level Model (KLM): the simplest variant — sum operator times to predict task time
- CMN-GOMS: represents goal hierarchies and selection rules explicitly
- NGOMSL (Natural GOMS Language): adds learning-time predictions proportional to the number of NGOMSL method statements (roughly 17 s per statement under laboratory conditions)
- CPM-GOMS: models parallel operation of perceptual, cognitive, and motor subsystems
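To illustrate the simplest variant, a KLM prediction is just the sum of per-operator times. Below is a minimal sketch using the classic operator estimates from Card, Moran, and Newell; the values are approximate and vary with user skill (e.g. K ranges from about 0.12 s for an expert typist to over 1 s for a poor one):

```python
# Minimal Keystroke-Level Model sketch: predict expert task time by
# summing standard operator estimates (in seconds). Values approximate.
KLM_TIMES = {
    "K": 0.28,  # keystroke (average ~40 wpm typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between mouse and keyboard
    "M": 1.35,  # mental preparation
}

def klm_time(operators: str) -> float:
    """Sum the operator times for a sequence such as 'MPHKKKK'."""
    return round(sum(KLM_TIMES[op] for op in operators), 2)

# Point at a text field, home to the keyboard, type four characters:
print(klm_time("MPHKKKK"))  # 1.35 + 1.10 + 0.40 + 4*0.28 = 3.97 s
```

Tools such as Cogulator automate exactly this kind of bookkeeping for longer task sequences.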
GOMS is most valuable for comparing alternative designs for routine tasks before prototyping, identifying workflow bottlenecks, and justifying design decisions with quantitative predictions. It is inappropriate for novel, exploratory, or error-prone tasks.
Related terms: Keystroke-Level Model, Model Human Processor, CPM-GOMS, Cogulator
Discussed in:
- Chapter 7: GOMS and the Keystroke-Level Model — The GOMS Framework
Also defined in: Textbook of Usability