This chapter presents two classes of methods for evaluating human performance and the interaction between humans and systems. The first class, risk analysis, comprises approaches to identifying and addressing business risks as well as safety and survivability risks. The second class, usability evaluation, comprises the range of experimental and observational approaches used to determine the usability of system features at all stages of the system development life cycle. Figure 8-1 provides an overview; it also lists the foundational methods (e.g., surveys, interviews, experiment design) noted in the Introduction to Part II because they play a central role in evaluation.
This section describes some commonly used tools for risk management, including failure modes and effects analysis (FMEA) and fault tree analysis (FTA). These tools are flexible and can be used to assess, manage, and mitigate risk.
FIGURE 8-1 Representative methods and sample shared representations for evaluation.
The emphasis is on using these tools to evaluate and control negative outcomes related to use error, that is, errors resulting from defects in the user interface element of human-system integration. With simple extensions, the tools can also be used to evaluate and control business risk related to the development cycle. Most of the following text focuses on use errors, but we make the case that the philosophy behind these tools can readily be applied to many other purposes, including assessing and controlling business risk. In the military, the analysis of use error is especially relevant to the HSI domains of human factors, safety and occupational health, and survivability.
As noted, these tools and related methods are frequently applied to understanding use errors made with medical and other commercial devices.
Use errors are defined as predictable patterns of human error that can be attributed to inadequate or improper design. Use errors can be predicted through analytical task walkthrough techniques and through empirically based usability testing. Here we explain and discuss the methodology of use-error-focused risk analysis and some of its history. Examples illustrate the methods of use-error risk analysis, such as FTA and FMEA, and some pitfalls to be avoided. These methods are widely used in safety engineering. The concepts are illustrated with a medical device case study involving an automatic external defibrillator and with a business risk example.
Risk analysis in the context of use errors in products and processes has received increasing attention in recent years, particularly for medical devices. The underlying techniques have been used for decades to assess the effect of human behavior on critical systems, such as aerospace, defense, and nuclear power applications. As defined above, use errors are predictable patterns of human error attributable to inadequate or improper design. Use error can also produce faults that create failures in many types of systems and products.
Use error is characterized by a repetitive pattern of failure indicating that a failure mode is likely to occur with use and thus is reasonably predictable. Use error can be addressed and minimized by the device designer and proactively identified through such techniques as usability testing and hazard analysis. An important point is that, in the area of medical products, regulatory and standards bodies draw a clear distinction between the common terms “human error” and “user error” and the term “use error.” The term “use error” attempts
to remove the blame from the user and to open the analyst to consideration of other contributing causes.
The analysis of human error has played a central role in risk analysis since the 1950s: initially in nuclear weapons assembly, then in the nuclear power industry, and later in industry more generally, particularly after the Three Mile Island accident in 1979. Although risk analysis in this chapter focuses on safety-critical systems, the risk of human error is relevant to human-system integration more generally, because errors can also result in inefficiencies, excessive cost of operations, and wasted resources.
Reason (1990) provides a comprehensive classification of errors, as shown in Figure 8-2. This classification makes clear that even though every error is identified by an action, the source of the error can lie in a much wider set of underlying failures. The category of knowledge-based mistakes can be expanded to include many additional psychological sources of mistakes.
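As an illustration only, the sketch below encodes the top-level categories commonly associated with Reason's classification as a simple data structure for tagging observed errors; the full taxonomy appears in Figure 8-2, and the category names and the tagged example here are simplifications assumed for the illustration.

from enum import Enum, auto

class ErrorType(Enum):
    """Top-level error categories after Reason (1990); simplified for illustration."""
    SLIP = auto()                     # unintended action: attentional failure
    LAPSE = auto()                    # unintended action: memory failure
    RULE_BASED_MISTAKE = auto()       # intended action: wrong rule applied
    KNOWLEDGE_BASED_MISTAKE = auto()  # intended action: incomplete or incorrect knowledge
    VIOLATION = auto()                # deliberate deviation from procedure

# Example: tagging an observed use error with a candidate classification (assumed scenario)
observed = {"action": "selected wrong dose unit", "candidate_type": ErrorType.RULE_BASED_MISTAKE}
print(observed["action"], "->", observed["candidate_type"].name)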
Embry (1987) summarizes approaches to human reliability assessment—that is, assessment of the risk of human error. The oldest and most well-known technique is the technique for human error rate prediction (THERP) (Swain, 1963; Swain and Guttman, 1983). This approach is based on probabilistic risk analysis and fault tree task decomposition methods, and it has been applied extensively in nuclear power plant design and procedure assessment. The techniques described in this chapter are the basic building blocks of quantitative methods, such as THERP; the degree to which complex models involving estimates of error probability are necessary depends largely on the application and the extent to which quantification is necessary.
FIGURE 8-2 Reason’s error classification. SOURCE: Reason (1990). Reprinted with the permission of Cambridge University Press.
However, basic error risk analysis as described in this chapter is relatively straightforward and is warranted in virtually all HSI applications.
An important first step in risk management is to understand and catalogue the hazards and the possible resulting harms that might be caused by a product or system. Sometimes this is called hazard analysis; others use the term more generally as a synonym for risk management. Hazard analysis is often accomplished as an iterative process, with a first draft updated and expanded as additional risk management methods (e.g., FMEA, FTA) are applied. Medical experts and those in quality control and product development, among other commercial product disciplines, can brainstorm harms and hazards. Technically, hazards are the potential for harm; harms are defined as physical injury or damage to the health of people or damage to property or the environment. Box 8-1 shows examples of harms from hazards for a penlike automatic needle injector device and similar harms and hazards for an automatic external defibrillator (a minimal sketch of such a hazard-to-harm catalogue appears after Box 8-2).
BOX 8-1
Possible Harms and Hazards from the Use of Medical Equipment
Use of an Automatic Needle Injection Device
Use of an Automatic External Defibrillator
Box 8-2 extends the notion of harm to negative business outcomes resulting from HSI faults.
Below we describe the most commonly used tools for use-error risk analysis: FMEA and FTA. These tools can also be used to assess and control business risk. The shared representations typically resulting from these methods are reports containing graphical portrayals of the fault trees or tabular descriptions of the failure modes. The FTA representations show cumulative probabilities of logically combined fault events, demonstrating the overall risk levels. The FMEA tables show calculated risk levels associated with different business or operational hazard outcomes.
BOX 8-2
Negative Business Outcomes Resulting from HSI Faults
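To make the distinction between hazards and harms concrete, the following minimal sketch shows one way a hazard analysis could be recorded as a hazard-to-harm catalogue; the device, hazards, harms, and field names are assumptions for the illustration and are not taken from Box 8-1 or Box 8-2.

from dataclasses import dataclass, field

@dataclass
class HazardEntry:
    """One row of a hazard analysis: a hazard and the harms it could lead to."""
    hazard: str                                  # the potential for harm
    harms: list = field(default_factory=list)    # resulting injury or damage
    notes: str = ""

# Illustrative catalogue for a hypothetical infusion device (assumed example)
catalogue = [
    HazardEntry(hazard="Dose entered in wrong units",
                harms=["overdose", "underdose"],
                notes="Candidate use error; feeds into the FMEA discussed below."),
    HazardEntry(hazard="Alarm volume set too low",
                harms=["delayed response to occlusion"]),
]

for entry in catalogue:
    print(entry.hazard, "->", ", ".join(entry.harms))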
The recommended steps for conducting a use-error risk analysis are the same as for traditional risk analysis, with one significant addition: the need to perform a task analysis. Possible use errors are then deduced from the tasks (Israelski and Muto, 2006). Each of the use errors or faults is rated in terms of the severity of its effects and the probability of its occurrence. A risk index is calculated by combining these two elements and can then be used for risk prioritization. For each of the high-priority items, modes (or methods) of control are proposed for the system or subsystem, and the risk is reassessed. The process is iterated until all higher-level risks are eliminated and any residual risk is as low as reasonably practicable (sometimes referred to as ALARP).
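As a rough illustration of this loop, the sketch below combines assumed severity and occurrence ratings into a risk index, prioritizes the use errors, and reassesses each after an assumed control is applied; the 1-5 rating scales, the multiplication rule, and the acceptance threshold standing in for ALARP are assumptions for the example, not prescriptions from the text.

# Sketch of the iterative use-error risk assessment loop described above.
# Ratings use an assumed 1-5 scale; risk index = severity x occurrence.

ALARP_THRESHOLD = 6  # assumed acceptance threshold for residual risk

use_errors = [
    {"fault": "User skips priming step", "severity": 4, "occurrence": 4,
     "control": "On-screen forced priming prompt", "occurrence_after": 1},
    {"fault": "User misreads dose display", "severity": 5, "occurrence": 2,
     "control": "Larger font and unit label", "occurrence_after": 1},
]

def risk_index(severity, occurrence):
    return severity * occurrence

# Prioritize by initial risk, apply the proposed control, then reassess residual risk.
for item in sorted(use_errors,
                   key=lambda u: risk_index(u["severity"], u["occurrence"]),
                   reverse=True):
    before = risk_index(item["severity"], item["occurrence"])
    after = risk_index(item["severity"], item["occurrence_after"])
    status = "acceptable (ALARP)" if after <= ALARP_THRESHOLD else "needs further control"
    print(f'{item["fault"]}: risk {before} -> {after} after "{item["control"]}" ({status})')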
Among the most widely used risk analysis tools are FMEA and its close relative, failure modes, effects, and criticality analysis (FMECA).
FMECA is an extension of FMEA that starts with the FMEA elements and further considers ratings of criticality and probability of occurrence. Because of their common basis, FMEA and FMECA are often referred to collectively as FMEA, and this section follows that convention.
FMEA is a design evaluation technique used to define, identify, and eliminate known or potential failures, problems, and errors from the system. The basic approach of FMEA from an engineering perspective is to answer the question: If a system component fails, what is the effect on system performance or safety? Similarly, from a human factors perspective, FMEA addresses the question, “If a user commits an error, what is the effect on system performance from a safety or financial perspective?” A human factors risk analysis has several components that help define and prioritize such faults: (1) the identified fault or use error, (2) occurrence (frequency of failure), (3) severity (seriousness of the hazard and harm resulting from the failure), (4) selection of controls to mitigate the failure before it has an adverse effect, and (5) an assessment of the risk after controls are applied.
A use-error risk analysis is not substantially different from a conventional design FMEA. The main difference is that, rather than focusing on component or system-level faults, it focuses on user actions that deviate from expected or ideal user performance. For business risk, the development faults would include the items shown in Box 8-2. Table 8-1 summarizes the steps in performing FMEA.
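As a minimal sketch of the kind of worksheet entry such an analysis might produce, the fragment below records the five components listed above for a single use error; the field names, rating scale, and example values are assumptions for illustration and do not reproduce Table 8-1.

from dataclasses import dataclass

@dataclass
class FMEARow:
    """One use-error FMEA worksheet entry (illustrative field set)."""
    use_error: str        # (1) identified fault or use error
    occurrence: int       # (2) frequency/likelihood rating (assumed 1-5 scale)
    severity: int         # (3) seriousness of the resulting hazard or harm (assumed 1-5 scale)
    controls: str         # (4) mitigations selected to intercept the failure
    residual_risk: int    # (5) risk index reassessed after controls are applied

    @property
    def initial_risk(self) -> int:
        return self.occurrence * self.severity

# Assumed example row for a hypothetical defibrillator use error
row = FMEARow(use_error="Pads placed in reversed positions",
              occurrence=3, severity=4,
              controls="Pictorial placement diagram printed on the pads",
              residual_risk=4)
print(row.use_error, "| initial risk:", row.initial_risk, "| residual risk:", row.residual_risk)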
Other commonly used tools for analyzing and predicting failure and consequences are fault tree and event tree analysis. FTA is a top-down deductive method used to determine overall system reliability and safety (Stamatis, 1995). A fault tree, depicted graphically, starts with a single undesired event (failure) at the top of an inverted tree, and the branches show the faults that can lead to the undesired event—the root causes are shown at the bottom of the tree. For human factors and safety applications, FTA can be a useful tool for visualizing the effects of human error combined with device faults or normal conditions on the overall system. Furthermore, by assigning probability estimates to the faults, combinatorial probabilistic rules can be used to calculate an estimated probability of the top-level event or hazard.
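As a hedged sketch of that last point, the fragment below combines assumed basic-event probabilities through AND and OR gates to estimate the probability of a top-level hazard; the gate structure and the numbers are invented for illustration and assume independent events.

import math

def and_gate(probabilities):
    """All input faults must occur (independent events): multiply probabilities."""
    return math.prod(probabilities)

def or_gate(probabilities):
    """Any input fault is sufficient: 1 minus the probability that none occur."""
    return 1 - math.prod(1 - p for p in probabilities)

# Assumed basic events for an illustrative fault tree
p_use_error = 0.01     # user applies pads incorrectly
p_no_prompt = 0.05     # device fails to give a corrective voice prompt
p_low_battery = 0.002  # independent device fault

# Top event: ineffective shock = (use error AND missing prompt) OR low battery
p_top = or_gate([and_gate([p_use_error, p_no_prompt]), p_low_battery])
print(f"Estimated probability of the top-level hazard: {p_top:.5f}")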
An event tree is a visual representation of all the events that can occur in a system. As the number of events increases, the picture fans out like the branches of a tree. Event trees can be used to analyze systems that involve sequential operational logic and switching. Whereas fault trees trace the precursors or root causes of events, event trees trace the alternative consequences of events. The starting point (referred to as the initiating event) disrupts normal system operation. The event tree displays the sequences of events involving success and/or failure of the system components. In human factors analysis, the events that are traced are the contingent sequences of human operator actions (Swain and Guttman, 1983).
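The sketch below enumerates the branches of a small event tree following an assumed initiating event, multiplying conditional branch probabilities along each success/failure path; the events, their ordering, and the probabilities are illustrative assumptions only.

from itertools import product

# Assumed initiating event: cardiac arrest witnessed by a lay responder.
# Each subsequent event branches into success (True) or failure (False),
# with an assumed probability of success. (A real event tree would typically
# prune later branches once an earlier event has failed.)
events = [
    ("Responder retrieves AED", 0.90),
    ("Pads applied correctly", 0.85),
    ("Shock delivered when advised", 0.95),
]

# Enumerate every success/failure sequence and its path probability.
for outcomes in product([True, False], repeat=len(events)):
    p_path = 1.0
    for (name, p_success), ok in zip(events, outcomes):
        p_path *= p_success if ok else (1 - p_success)
    label = ", ".join(f"{name}: {'success' if ok else 'failure'}"
                      for (name, _), ok in zip(events, outcomes))
    print(f"{label}  ->  path probability {p_path:.4f}")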