Human-System Integration in the System Development Process: A New Look

10
Conclusions and Recommendations

In this chapter, we report our broad conclusions related to each of the themes we introduced at the start of the report. These conclusions reflect detailed consideration of (1) our research into current views of systems engineering, (2) what the committee learned is needed to meet the requirements for adequate support for the role of humans in systems, (3) our survey of the methods and tools available to support what is needed, and (4) our assessment of the state of the art in human-system integration (HSI).

Our most fundamental conclusion is that human performance and human-system integration will never be most effective in system design unless it is seen by all stakeholders as an integral part of the entire systems engineering process, from initial exploration and concept evaluation through operational use, reengineering, and retirement. Many systems have failed because the role of humans was considered only after design problems were identified—when it was too late to make the kind of changes that were required to produce systems responsive to users’ needs. We conclude that the definition of user requirements should begin when the system is first being conceived, and those requirements should continue to provide important evaluation criteria right up to the time the system is placed in use.

The military services are recognizing the need for more emphasis on human considerations in design through the introduction of MANPRINT, SEAPRINT, and, most recently, AIRPRINT requirements. More and more commercial software, hardware, and service industries are beginning to realize that commercial success requires attention to the customer’s needs and that achieving that success has implications for the product engineering








team, not just the marketing and sales teams. It begins with product conceptualization and continues throughout the product development cycle.

As a process for integrating human considerations into the systems engineering process, the committee has built on the strengths of existing systems engineering process models (waterfall, V-model, concurrent, incremental, spiral, evolutionary, agile) to synthesize an incremental commitment model (ICM) that helps to situate HSI activities within a system’s life cycle. As described in the introduction, this model is based on five critical success factor principles: (1) negotiation to “satisfice” system stakeholders’ (e.g., users, acquirers, developers) requirements; (2) incremental growth of system definition and stakeholder commitment; (3) concurrent system definition and development; (4) iterative system definition and development; and (5) risk management. The incremental commitment model is consistent with current approaches to systems engineering, including the U.S. Department of Defense (DoD) 5000 series of system acquisition policies and guidelines, and provides the kind of emphasis that the committee considers important to achieving human-system integration. Although it is not the only model that could be used on future human-intensive systems, it has served as a reasonably robust framework for explaining the study’s HSI concepts and for evaluating them via the case studies in Chapter 5. There are also ways to extend or reinterpret existing process models to accommodate the five critical success factor principles and HSI activities.

In the paragraphs below we build on the six themes first mentioned in the introduction and highlight features based on them that require special attention from the perspective of human-system integration.

Begin HSI contributions to development early and continue them throughout the development life cycle.
If there were a single message to communicate to program managers and developers, it would be to understand that HSI expertise is important from the very beginning of the life cycle, when systems are first being conceived. HSI specialists are trained to explore and understand the environment in which a system will be used. In order to develop an operational concept, full understanding of the context of use is required. These factors need to be assessed even before a conceptual design is put forward. Human factors specialists have a collection of methods and tools for efficiently understanding the system environment and context of use. Consideration of these factors early can have orders-of-magnitude impacts on system performance. If human factors and other HSI input are left until the test and evaluation stage, only small-percentage improvements can be realized under the best of circumstances, and there is a risk that the system will not satisfy the original goals. We have also emphasized that system development needs to be an iterative process, and that there are human-system design considerations that evolve and need to be iterated along with every other aspect of system development.

Integrate across human-system domains as well as across the system life cycle.

The domains identified in the MANPRINT methodology—human factors, manpower, personnel, training, system safety and health, habitability, and survivability, the first five of which are potentially as relevant to commercial products as to military systems—are not independent, and they must not be considered in isolation (i.e., “stove-piped”). While each has its own methods, there are many areas in which the methodologies we describe in Part II can serve multiple purposes across the domains and do not have to be repeated for each. For example, task analysis, risk analysis, and workload analysis can support human factors, manpower, training, and safety. Ergonomic analysis can support human factors, training, safety, and health hazards. Doing so requires that the individual specialists in each area cooperate up front to ensure that the resulting shared representations meet the requirements of all the domains. This is a critical aspect of negotiation to “satisfice” system stakeholders’ requirements.

Adopt a risk- and opportunity-driven approach to determining needs for HSI activity.

At each of the system development milestones, the systems engineering team undertakes an analysis of the development risks and opportunities before proceeding to the next milestone. It is essential that the HSI team contribute an evaluation of HSI risks and opportunities to be considered in collaboration with the rest of the systems engineering team. It is through the risk analysis that the case can be made for assigning resources to evaluate particular risks further or to find ways to mitigate the risks that the system will fail: for example, because of safety hazards, because it will be too costly to train the personnel in its use, or because it will be maladapted to the people who must use it.
In addition, considering opportunities may allow the HSI team to improve program execution and system capabilities. There is often a tendency for the HSI team to insist on a complete HSI analysis. The purpose of the risk and opportunity analysis is instead to focus attention on the risks whose likelihood and seriousness are both appreciable, as well as on the opportunities with the greatest payoffs. It will also serve to identify the areas of development in which the risks are minimal and do not need further attention. HSI risk and opportunity analysis should become a component of the overall system development risk analysis and should be given importance equal to that of other system risk factors. The use of human-sensitive mission effectiveness models, simulations, and exercises can be highly effective in this regard. If there are integrated product teams (IPTs) for which HSI issues are relevant, there should be at least one HSI representative on each such team, and that person should be responsible for ensuring that the HSI risks and opportunities are considered.
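The triage logic described above, concentrating effort on risks whose likelihood and seriousness are both appreciable, can be sketched in a few lines. This is an illustrative sketch only, not a method prescribed by the committee; the item names, the 0–1 scales, and the exposure threshold are all assumptions made for the example.

```python
# Hypothetical sketch: rank HSI risk items so that attention goes to
# those whose likelihood and seriousness are both appreciable, and
# identify items whose risk is low enough to defer.
from dataclasses import dataclass

@dataclass
class HSIItem:
    name: str
    likelihood: float   # 0.0-1.0, chance the risk materializes
    seriousness: float  # 0.0-1.0, impact if it does

    @property
    def exposure(self) -> float:
        # A common simple metric: exposure = likelihood x seriousness.
        return self.likelihood * self.seriousness

def triage(items, threshold=0.2):
    """Split items into those needing further HSI analysis (sorted,
    highest exposure first) and those that can be deferred."""
    attend = sorted((i for i in items if i.exposure >= threshold),
                    key=lambda i: i.exposure, reverse=True)
    defer = [i for i in items if i.exposure < threshold]
    return attend, defer

risks = [
    HSIItem("training cost exceeds budget", 0.6, 0.7),
    HSIItem("display layout slows expert users", 0.3, 0.4),
    HSIItem("label font too small", 0.2, 0.1),
]
attend, defer = triage(risks)
```

In practice the scales, the threshold, and the aggregation rule would themselves be negotiated with the rest of the systems engineering team, so that HSI exposure figures are commensurable with the program's other risk factors.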

Tailor methods to time and budget constraints.

Every system development takes place under time and budget constraints, and it is not possible to undertake full-scale HSI evaluation of every aspect of a system development. Early in the life cycle, as part of the iterative system definition and development, it is important to evolve the human-system requirements, prepare the HSI part of the business case for designing and fielding the system, and undertake the risk and opportunity analysis. The business case should include quantitative performance objectives based on human capacities and limitations. From that point on, it is important that the HSI team, driven by its risk, opportunity, and requirements analyses, focus further attention only on the critical issues and requirements identified in the risk/opportunity analysis. With respect to each identified issue, the team should evaluate the analysis requirements carefully, consider alternative approaches to achieving them, and select the methods and tools that are most cost-effective for answering the questions at hand. The proposed budget should be based on a careful but realistic analysis of what needs to be done to satisfy the critical and most risky requirements. Doing so will gain the respect and confidence of the program manager and will improve the chances that an adequate budget will be provided.

Ensure communication among stakeholders of HSI outputs.

Many of the contributions of the HSI team—especially those developed early in the development process—tend to be based on observation, interview, and questionnaire methods. The individuals who collect the data become the most knowledgeable about the characteristics of the system environment and context of use. Similarly, the knowledge acquisition associated with developing task and process analyses results in very rich information in the heads of the analysts.
However, much of this information is needed by all the system stakeholders, from the funders and program managers to the detail designers and developers. In following the principle of negotiating to satisfice all stakeholders’ requirements, it is very important that the HSI team provide outputs and deliverables that capture the information and its interpretation in forms that are understandable and usable by these stakeholders—we have called them shared representations. We have discussed the kinds of methods to be used for generating the needed information and, for each method, recommended the kinds of shared representations that should be developed as the outputs. Effort should be made to create these shared representations in a form that is readily assimilated into the engineering process, that is, expressed in terms that are compatible with other engineering outputs. This might be accomplished through the generation of scenarios of use, models and/or simulations based on the task analysis output, or analyses of the context of use. Effective shared representations can be very helpful in smoothing the flow of information among team members and in ensuring that the HSI team’s output is influential.
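The earlier guidance on tailoring methods to time and budget constraints, selecting the most cost-effective tools for the critical issues, can be illustrated with a simple value-per-cost heuristic. The method names, costs, and values below are hypothetical, and a greedy ratio sort is only one of several ways to make such a selection; it is a sketch, not a committee-endorsed procedure.

```python
# Illustrative sketch: choose HSI methods under a fixed budget by
# ranking candidates on value per unit cost (a greedy heuristic,
# not an optimal knapsack solution).
def select_methods(candidates, budget):
    """candidates: list of (name, cost, value) tuples in arbitrary
    but consistent units. Returns the chosen method names and the
    total spent."""
    chosen, spent = [], 0
    for name, cost, value in sorted(candidates,
                                    key=lambda c: c[2] / c[1],
                                    reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

candidates = [
    ("task analysis",       30,  90),
    ("full usability lab", 120, 150),
    ("heuristic review",    10,  40),
    ("field observation",   50, 100),
]
chosen, spent = select_methods(candidates, budget=100)
```

The "value" figures would in practice come from the risk/opportunity analysis described above: a method addressing a high-exposure risk is worth more than one addressing a minor issue.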

Design to accommodate changing conditions and requirements in the workplace.

There have been, and continue to be, significant changes in many factors that influence the way work gets accomplished and the nature and complexity of the systems that are developed. Personnel costs are a significant percentage of the operational cost of systems, and everywhere there is pressure to reduce the numbers of personnel. Technology is often seen as a panacea to reduce personnel costs, increase efficiency, and improve safety. Technological evolution has become much more rapid, and the systems developed last year may already be out of date. It is impossible to capture all the requirements up front, so it is valuable to develop systems that can be more easily adapted or modified in order to continue to provide support as the work context changes. The ultimate ideal is to create evolvable systems that can be “appropriated” (or reinvented) by the users and tailored to meet the inevitable changes that will arise. This argument is consistent with the principles of incremental growth of system definition and concurrent system definition and development—the idea that requirements should not be assumed to be fixed but instead expected to evolve over the life cycle of the system.

The design of systems of systems involves a level of complexity and challenge much greater than the design of individual complex systems themselves. For example, the military is designing command and control systems that span the activities of many logistics, battlefield operations, manned and unmanned aerial systems, and multinational forces. Telephone companies are now faced with integrating digital phone systems with cell phones, Internet access, and television delivery systems.
Hospital information systems must be integrated with the accounting, nursing unit, pharmacy, and individual physicians’ workstations, not to mention supply and inventory control systems, and they must do so in a way that promotes patient safety. Complex systems of systems demand new approaches to uncover the multiple points of interdependency across systems and to anticipate their impacts on the people operating in those environments. New envisioning methods and modeling tools are needed to predict the kinds of challenging situations that are likely to arise, the kinds of adaptations that will be required of people to cope with new complexities, and the kinds of errors and failures that may emerge in the future (Woods and Dekker, 2000; Woods, 2002; Winograd and Flores, 1987; Feltovich et al., 2004). The ability to anticipate likely reverberations of technology insertions early in the design process can contribute substantively to the design of complex systems and systems of systems that are resilient in the face of a wide range of operational perturbations (Hollnagel, Woods, and Leveson, 2006). The emergence of systems of systems further emphasizes the importance of considering human-system integration as an integral part of the

development process. We have highlighted the value of iterative design and the role that shared representations, especially models and simulations, can play in ensuring that all stakeholders remain informed about the current state of development. In this kind of very dynamic development environment, it is important to keep in mind the potential for changes after the system is implemented. Information currency is likely to be a very important consideration, since design requirements can change with each new iteration. In Chapter 2, we described a procedure for accommodating rapid change and high assurance through incremental development. Designing for evolvable systems requires anticipating the scope of changes that might take place, making the design modular, leaving appropriate entry points for the changes, providing thorough software documentation, and providing scalable service-oriented architectures.

RESEARCH AND POLICY RECOMMENDATIONS

These recommendations identify further critical steps to facilitate the kind of integration into systems engineering that we consider of paramount importance. Our intent is to provide sufficient detail to guide the development of a research plan and the formulation of policy initiatives for the DoD and other government and private organizations. The recommendations are organized into four areas: (1) realizing the full integration of human systems and systems engineering; (2) methods for defining opportunities and the context of use; (3) methods for defining requirements and design; and (4) methods for evaluation. Accomplishing these steps will provide needed support to realize the future scenarios outlined in Chapter 9. The committee was not able to prioritize these research recommendations, as they cover diverse areas of equal importance. We believe work in these areas should proceed concurrently.
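One of the evolvable-design tactics mentioned earlier, leaving appropriate entry points for change, can be sketched as a simple registration mechanism: later increments add capabilities without modifying the core dispatch code. This is a hypothetical illustration of the general idea, not a pattern mandated by the report; all names are invented.

```python
# Illustrative sketch of "leaving entry points for change": new
# capabilities register themselves against a message kind, so the
# core dispatcher never needs to be edited as the system evolves.
_handlers = {}

def entry_point(kind):
    """Decorator that registers a handler for a message kind."""
    def register(fn):
        _handlers[kind] = fn
        return fn
    return register

def dispatch(kind, payload):
    """Route a message to whatever handler is registered for it."""
    if kind not in _handlers:
        raise KeyError(f"no handler registered for {kind!r}")
    return _handlers[kind](payload)

@entry_point("status")
def handle_status(payload):
    return f"status: {payload}"

# A later increment adds a capability without touching dispatch():
@entry_point("alert")
def handle_alert(payload):
    return f"ALERT: {payload.upper()}"
```

The same idea scales up to plugin interfaces and service-oriented architectures: the stable part of the system defines the extension contract, and evolving requirements are absorbed at the registered entry points.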
Realizing the Full Integration of Human Systems and Systems Engineering

This report presents the incremental commitment model as an example framework for system development activities and discusses how human-system integration fits within this framework. Here we present our policy and research recommendations regarding the principal areas of research, development, and policy initiatives needed to facilitate integration throughout the development life cycle and across HSI disciplines. These areas include:

- Institutionalizing the success factors associated with the incremental commitment model.
- Accommodating the emergence of HSI requirements.

- Ensuring that HSI operational requirements are included in the initial system development contract and acquisition documents.
- Managing integrated system development.
- Providing traceability of HSI objectives, decision points, and the rationale for decisions across life-cycle design phases.
- Developing approaches to human-system integration and systems of systems research.
- Sizing the HSI effort.
- Designing shared representations to facilitate communication across disciplines and life-cycle phases.
- Creating knowledge-based planning aids for providing HSI information.
- Developing human-system integration as a discipline and as a lead for the IPT.
- Fostering more synergy between research and practice.

Institutionalizing a System Development Process Based on the Success Factors

Through our analyses of more and less successful HSI projects, our evaluation of alternative HSI process models, and our case studies, the committee makes the case that a model like the incremental commitment model better enables the kind of human-system integration that will be needed for the complex, human-intensive systems of the future. It embodies the success factor principles of stakeholder satisficing; incremental growth of system definition and stakeholder commitment; iterative system definition and development; concurrent system definition and development; and risk-driven activity levels, product levels of detail, and anchor point milestones. It does this in clearer ways than the spiral model, particularly for HSI considerations, and in a manner compatible with the DoD acquisition milestones and with the milestones of the commercial IBM/Rational Unified Process and the Eclipse Process Framework OpenUP. It provides a process framework for the top-level recommendation of realizing the full integration of human engineering and systems engineering.

Recommendation: The U.S. Department of Defense and other government and private organizations should refine and coordinate the definition and adoption of a system development process that incorporates the principles embodied in the incremental commitment model. It should be adopted as the recommended approach for realizing the full integration of human-related design considerations with systems engineering in organizational policies and process standards, such as the DoD 5000 series and the ISO systems engineering standards.

Accommodating the Emergence of HSI Requirements

Particularly for complex systems of systems and for collaboration-intensive systems, human-system interface states, modes, and functional requirements are not known at the time of program initiation. Many current system acquisition policies and standards require these human considerations to be fully defined before proceeding into development. Although it is risky to leave HSI requirements completely undefined, it is equally risky to insist on defining them before they are fully understood or allowed to emerge through experience. A reasonable middle approach is to use incremental and evolutionary development processes and to define HSI requirements in terms of capabilities, with more detail provided for later increments, but with sufficient detail provided for earlier increments to ensure proper preparation for the later ones. This approach is consistent with the principle of risk-driven levels of product detail.

Recommendation: The U.S. Department of Defense and other government and private organizations should revise current system acquisition policies and standards to enable incremental, evolutionary, capabilities-based system acquisition that includes HSI requirements and uses risk-driven levels of requirements detail, particularly for complex systems of systems and for collaboration-intensive systems.

HSI Operational Requirements in Contracts and Acquisition Documents

In discussing risk management, we have alluded to the importance of considering HSI aspects when negotiating baseline metrics for program execution. This negotiation is a critical phase in product development, when estimates and assumptions are formulated and agreed on by all stakeholders.
Customer requirements and value propositions, technical performance measures that gauge compliance with technical requirements, schedule milestones, and requisite resources all contribute to that negotiation. Involving HSI practitioners in the negotiation process ensures that their perspective and knowledge are accounted for, reducing the likelihood that HSI risks and issues will arise during program execution. This recommendation focuses on policy, rather than research, and addresses the need to have human-system integration considered in establishing program execution baselines. Crisp requirements that have been properly vetted are key to successful contract execution and to an end product that fills a specified role and meets operational needs.

Recommendation: The U.S. Department of Defense and other government and private organizations should put the operational requirements of human-system integration on a par with traditional engineering requirements at the beginning of initial requirements analyses to determine which requirements have priority and to provide an opportunity for negotiation.

Recommendation: When developing system acquisition programs, the U.S. Department of Defense and other government and private organizations should define potential means for verifying and validating HSI requirements to enable supplier program managers to establish clearly specifiable HSI technical performance measures for contracts.

The procuring agency has the ability to drive contractor HSI efforts by setting the extent to which HSI considerations are accounted for contractually and their degree of importance. Without the inclusion of HSI considerations throughout program definition efforts, contractors have limited basis for addressing HSI considerations in their business offer.

Recommendation: The U.S. Department of Defense and other government and private organizations should account for HSI considerations in developing the technical, cost, and schedule parameters in the business offer. In particular, contracts need to reflect an understanding of how human-system integration affects the ability to reuse existing technical solutions or the feasibility of inserting new technologies, as well as an appreciation of how anticipated HSI risks may affect meeting program award fee criteria. It is also important that the contractor understand how HSI elements in its product offering contribute to achieving market capture goals and, subsequently, the viability of its business case.
Overall, the procuring agencies are able to directly influence the extent to which HSI elements are addressed in contracts by establishing well-articulated HSI requirements reflective of end-user needs and working with the contractor to establish verification and validation methods that overcome program management concerns about the typically subjective nature of HSI elements. The contractors or suppliers should take the time to involve HSI practitioners in their business development efforts to account for HSI elements in the business offer, thereby mitigating a portion of potential HSI risks and issues that may arise during program execution.

Managing Integrated System Development so That All Representations Are Kept in Synchronization

In our vision for an integrated system development methodology, a serious concern is configuration control of the various entities that are being developed to support it. It is likely that new developments in web technology will be able to support some of these requirements.

Recommendation: Explore the usefulness of the technologies associated with Web 2.0 and related web developments for providing support for configuration control and synchronization of the component representations in a large system development project as they evolve and become more quantitatively defined.

Recommendation: Support a research program to explore how to provide flexible and open systems with appropriate security protections. The apparent conflict between openness and protection is not a matter of balance or trade-off, but rather of providing strong forms of both attributes.

Traceability and Requirements

The committee has argued for the importance of capturing the context of use in a form that can inform later phases of design. This is important to ensure that operational objectives and constraints, and their design implications, are taken into account in the system design process, so that the final “as-built” system meets the support objectives and constraints identified in earlier phases. This goal can be met only if methods and tools facilitate capture and traceability of HSI design objectives, decision points (together with the rationale for those decisions), and constraints across design phases. Our vision is to adapt existing tools, or to develop new software tools, to facilitate tracing how HSI design objectives and their implications are being met, ensuring that they are preserved across design phases.
This includes traceability across multiple intermediate human-system integration shared representations, starting with (1) outputs of context of use analyses that specify domain demands, stakeholder objectives, human performance needs, and design implications; through (2) the products of intermediate design phases, such as scenarios, personas, models, and prototypes; through (3) the decision rationale and system hardware and software design specifications intended to reflect the support objectives embodied in the design concepts; to (4) the final as-built system. Traceability across design phases is important to ensure that HSI objectives and constraints are preserved as design proceeds or when modification or redesign is undertaken. It also makes it easier to assess whether the as-built system meets the operational and support objectives and design implications uncovered by earlier design phases.

Recommendation: Adapt existing or develop new methods and tools that facilitate capture and traceability of HSI design objectives, design rationale, and constraints across design phases. Specifically:

- Develop shared representations that effectively communicate how the output of one design activity meets the objectives, design rationale, constraints, and design implications uncovered in the prior design phase.
- Develop shared representations that effectively communicate essential design characteristics and their rationale and that can be interpreted and used by multiple system development stakeholders, including individuals who did not participate in earlier design activities (see Wampler et al., 2006, for an example of an effort toward this goal).
- Adapt existing and develop new software tools to support traceability and to propagate updates when changes in later design phases require updates to the outputs of earlier design phases.
- Adapt existing and develop new tools and techniques for explicitly connecting HSI objectives and design implications to higher level system requirements tracked in formal requirements tracking systems. This is important to ensure explicit links between HSI design objectives and system-level requirements that reflect contractual commitments.
- Adapt existing and develop new methods for generating scenarios that reflect the range of complexities uncovered by context of use analyses. This corpus of scenarios can be used to support development and evaluation of designs, procedures, and training, including human reliability and safety analyses. The scenarios could also be used to exercise models and simulations as part of the system development process.
The goal would be to ensure that the systems have been explicitly designed and tested to support performance across a comprehensive range of representative situations, as identified by context of use analyses. Context of use scenarios are also essential to the meaningful definition of such key performance parameters as response time, reliability, and accuracy.
- Develop methods to identify meaningful human (and joint person-computer system) performance metrics that can provide the basis for objective system acceptance criteria. This is important to encourage incorporating HSI objectives as part of formal contractual
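The traceability audits recommended above can be pictured with a toy link structure: each HSI design objective points at the intermediate artifacts and formal requirements that carry it forward, and a small check flags objectives whose trace has been lost. All names, and the "REQ-" identifier convention, are invented for illustration; real tools would sit on top of a requirements tracking system.

```python
# Hypothetical sketch of HSI traceability: map each design objective
# to the artifacts and formal requirements that trace it forward,
# then flag objectives whose trace was dropped along the way.
links = {
    # objective                  -> artifacts / requirement IDs
    "minimize operator workload": ["task-model-v2", "REQ-117"],
    "support degraded-mode ops":  ["scenario-storm", "REQ-205"],
    "two-person crew limit":      [],   # trace lost during redesign
}

def untraced(links, require_requirement=True):
    """Return objectives with no trace at all or, optionally, with no
    link to a formal requirement (IDs starting 'REQ-' in this sketch)."""
    missing = []
    for objective, targets in links.items():
        has_req = any(t.startswith("REQ-") for t in targets)
        if not targets or (require_requirement and not has_req):
            missing.append(objective)
    return missing
```

Run after each design phase, such a check makes a lost trace visible while the rationale for the objective is still recoverable, rather than at acceptance testing.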
… "opt-in" and "opt-out"?1 How can the available privacy options be effectively presented and explained to users? What technology and user experience are required to allow users to define and implement their own data protection and security policies?

- Examine the programs of nonprofit organizations that have proposed to store users' data in a protected repository, so that users can negotiate for some benefit in exchange for allowing other organizations to access and use their data.2 How can these options be implemented technically on a large-scale (market) basis? How can they be effectively presented and explained to users? What commercial models of benefit for personal-data access transactions should be available? What fraud protections are possible? What are effective mechanisms through which users can (a) make their personal data available to third parties and then, upon need, (b) withdraw both their permission and their data from those third parties?
- Explore ways in which large data sets of user information could be made available to authorized users and yet be protected from unauthorized users.
- Determine ways to (a) detect unauthorized access, (b) record the extent of unauthorized access to data stores, and (c) automatically notify affected users, so they know what kinds of self-protection to invoke following such unauthorized access.

Methods for Defining Requirements and Design

The committee makes recommendations concerning the research and development needs related to human-system development and to developing prototypes of organizations and training programs.

Human-System Model Development

Human-system models have already been shown to be useful in the system acquisition and development process as a means to reduce uncertainty and development risk; however, they are not employed to the extent that even the current state of development would justify.
1. In an opt-in approach, the user's permission (e.g., to store or share data) is explicitly requested, and no action is taken unless the user acts to permit storage or sharing of data. In an opt-out approach, the user is informed that storage and/or sharing will be done unless the user takes action to prevent it or to revoke permission. In this latter case, the burden is on the user to prevent the storage or sharing of his or her data. In general, users prefer opt-in approaches, whereas merchants prefer opt-out approaches.
2. See http://www.attentiontrust.org.

There is a perception that models that reflect human performance characteristics are too hard to use or understand. Potential users focus on the limitations and not on the advantages. In fact, models exist at all levels of complexity, from simple mathematical expressions to complex computer programs. That said, it is true that the more sophisticated models, particularly those derived from discrete-event simulators and cognitive architectures, are often brittle, costly, and time-consuming to develop and are not yet well validated for all uses in design. There is a wide variety of both research developments and policy changes that have the potential to affect the usefulness and usability of human-system models.

Recommendation: Conduct an in-depth study of how human-system models are created, used, and shared, together with their strengths and limitations. The study should consider not only the various structures and architectures in which to build models, but also how data are acquired and represented in these models. What makes a model easy or difficult to use? To what extent are models reusable? Why aren't they reused more often? Such a study would support improved education about how to develop models, as well as provide recommendations for improving the quality, robustness, usefulness, and usability of the models that are developed. The study should include a retrospective review of a range of models, from Fitts's law, signal detection theory, GOMS, and Micro Saint-based models to complex cognitive architectures, such as ACT-R and EPIC.

Recommendation: Pick, as a case study, a class of models at an intermediate level of complexity and invent a high-level human-system model development language, with the goal of making building such models as simple as customizing an Excel spreadsheet for a specific application.

Recommendation: Explore the applicability of machine learning and adaptation algorithms for growing more robust models.
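As an illustration of the simple end of that modeling spectrum, Fitts's law predicts pointing movement time from just two empirically fitted parameters. The sketch below is a minimal rendering of the formula; the coefficient values for a and b are illustrative placeholders, not fitted data.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predict pointing movement time (seconds) with Fitts's law:

        MT = a + b * log2(2D / W)

    where D is the distance to the target and W is the target width.
    The constants a and b are device- and task-specific values fitted
    from data; the defaults here are illustrative only.
    """
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty

# Halving the target width raises the index of difficulty by one bit,
# adding a constant b seconds to the predicted movement time.
wide = fitts_movement_time(distance=200, width=40)
narrow = fitts_movement_time(distance=200, width=20)
```

Even this one-line model supports design trade-off questions (how much time a larger button buys), which is the kind of lightweight utility the recommendation above argues is overlooked.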
Currently, in models such as IMPRINT, the user models included in the systems are useful, but the theories from which they are derived often lead to a basically linear, single-thread model of human attention to tasks. Increasingly, multiple-task management, the impact of interruptions, and the role of situation awareness in decision making and planning are important in complex system analysis.

Recommendation: Expand the fidelity of the user representations to include these aspects of behavior and how they change with time on task, workload, heat, stress, and other behavior moderators.
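The cost of a linear, single-thread attention assumption can be seen in even a toy sketch: if interruptions simply preempt the current task and resume it later, each switch adds not only the interruption's own duration but a resumption cost. The task set and the 2-second resumption cost below are hypothetical illustrations, not values from IMPRINT or any validated model.

```python
def serial_completion_time(task_durations, interruption_durations,
                           resumption_cost=2.0):
    """Total time for a single-thread operator to finish all tasks.

    task_durations: task lengths in seconds.
    interruption_durations: lengths of preempting events, each of which
    also incurs a fixed resumption cost when the operator returns to the
    interrupted task. This deliberately simple accounting is the kind of
    linear model the text argues needs richer treatment (workload,
    situation awareness, behavior moderators).
    """
    return sum(task_durations) + sum(
        d + resumption_cost for d in interruption_durations)

# Three 60 s tasks with two 10 s interruptions:
# 180 + 2 * (10 + 2) = 204 seconds.
total = serial_completion_time([60, 60, 60], [10, 10])
```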
Recommendation: Expand models, particularly human behavior representations and cognitive models, to include the effects of culture, social processes, and emotion. This will also require gathering additional data, as many studies in these areas are not performed with application to models in mind.

There is much research on validating models, and validation is recognized as a very complex and difficult problem. The consensus is that face validity is inadequate but that achieving "application validity" is realistic and should be required. Application validity is defined as the degree to which a model or simulation is a faithful representation of the real world from the perspective of the intended users. Models are developed for specific purposes, and it is validation with respect to those purposes that is important.

Policy Recommendation: Require all human-system performance models that are to be used in system acquisition risk reduction to meet the standards of application validity.

Recommendation: At a research level, create better validation criteria. How good is good enough? Better model validation criteria are needed for specific model types and for models in general. Currently, when models are applicable, how much risk they reduce, and how valid they need to be to reduce risk are not well defined or even well explored.

Models and simulations have the potential to serve as effective shared representations for communicating the state of system development across the range of stakeholders. Their major uses will be to support coordination and integration of multiple viewpoints, to provide shared envisioning of operational concepts and predicted performance characteristics, and to support system integration. Current examples fall short of achieving required goals and require further development.
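Because application validity is validity with respect to an intended use, one way to operationalize it is as an acceptance check of model predictions against observations drawn from that use context. The sketch below is one hypothetical operationalization; the 20-percent tolerance and the paired data are invented for illustration, not a proposed standard.

```python
def application_valid(predicted, observed, tolerance=0.20):
    """Judge a model's application validity by whether its mean
    absolute relative error, against observations from the intended
    context of use, stays within a stated tolerance.

    predicted, observed: paired performance measures (e.g., task
    completion times in seconds) from the model and from the field.
    tolerance: maximum acceptable mean relative error (illustrative).
    """
    errors = [abs(p - o) / o for p, o in zip(predicted, observed)]
    return sum(errors) / len(errors) <= tolerance

# Model predictions vs. observed task times from the target setting.
ok = application_valid([10.0, 21.0, 33.0], [11.0, 20.0, 30.0])
```

The point of the sketch is that the validation criterion is defined relative to a purpose (the data and tolerance come from the intended use), echoing the "how good is good enough?" question the recommendation raises.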
Recommendation: Conduct research on how to make the design rationale and the relationships among model and simulation assumptions, execution, and derived performance measures more transparent and understandable.

Prototyping Training and Organizational Design

The committee has explained the role of prototyping in the systems development process. One of the challenges in developing integrated systems is balancing the prototyping of elements of the proposed system in isolation (in order to support parallel development and validation of the elements) against prototyping the collection of subsystems (in order to evaluate the overall behavior of the linked subsystems and the trade-offs among them). In conventional systems engineering practice, both are done. The real challenge comes when the human operator, team, or organization must be considered in a more inclusive HSI design effort. It is clear from increasingly complex system development efforts that the earlier HSI issues can be addressed, the better.

One way of addressing the challenge early is to create, like the early machine-system prototypes, early prototypes of the human organization that will be interacting with the mediating technology or the system hardware and software components. Organizational prototypes could take many forms. They could be simply verbal or descriptive concepts and theories, involving walkthroughs or talkthroughs with hypothetical organizational structures. Or the rules defining the relationships between organizational elements could be defined, and individuals could stand in for each organizational element, a kind of interactive role-playing, carrying out prototypical interorganizational operations (i.e., following predefined scenarios) while observing these rules and constraints (for summaries of successful applications of these types of approaches, see Bjerknes, Ehn, and Kyng, 1987; Bødker et al., 2004; Muller, 2007; Muller et al., 1997). Alternatively, organizational elements could be represented by computational models and simulations, including the rules and constraints for interacting with each other (National Research Council, 1998). For example, a synthetic teammate based on a computational model could serve as a training or operational aid, as well as a component prototype for system design (Gluck et al., 2006).
This will mean that one has to know not only all the interrelationships or links involved (human-to-human, human-to-machine, and machine-to-machine interactions) but also the nonmachine elements or nodes. As in any prototyping problem, choosing the appropriate level of resolution (person, team, organization) will become even more critical as person-machine coproduction comes to define the success or failure of the teams, organization, and system involved.

Similarly, it is important to consider prototyping the training program of potential team members as early in the design process as possible. This involves postulating alternative ways the training could be accomplished and testing their usefulness at varying levels of specificity as the design matures. Success in this approach will mean that the system design, organization, and training program all co-influence one another.

This kind of work is at such an early stage of development that there are many unanswered questions:

- What does it mean to prototype an organization, and what is the current state of the art?
- Are there differences between prototyping formal versus informal organizations?
- What are the implications of prototyping static versus dynamic teams and organizations? And how are artifacts, the nonhuman components of the system, accounted for?
- Are the prototyping issues different for individual, team, and organizational prototypes?
- What disciplines should be involved in supporting prototyping at the individual (cognitive psychology), team (social psychology), and organizational (sociology, economics, anthropology, political science) levels?

Recommendation: Undertake a review of the current state of the art in prototyping organizations. Define a set of requirements that effective prototyping methods should meet. Select a candidate, relatively complex domain, perhaps a system-of-systems domain, and define alternative organizational structures that might be effective in this domain. Define alternative prototyping methods designed to span the range from very abstract to very specific. Apply the different methods to evaluate the different possible organizations for this domain, and revise the methods until they meet the requirements proposed.

Recommendation: Undertake a review of the current state of the art in prototyping training systems. Define a set of operational domains and compare training requirements. Examine the use of synthetic agents in the development of training prototypes.

Methods for Evaluation

We have discussed two classes of evaluation methods: usability evaluation and risk analysis. Here the committee provides research and development objectives in both areas.

Improve the Use of Usability Objectives

The quantification of usability goals through the use of usability objectives is a recognized human factors and HSI best practice for many kinds of systems. But they are not employed often or consistently.
The main goal of specifying usability objectives (also known as usability requirements, usability goals, performance goals, or human factors requirements) is to create a metric that can be applied during usability testing as a way of having quantitative acceptance criteria for the test. Usability objectives are one way to create a quantitative, quality-related goal and avoid the qualitative conclusions that are sometimes claimed about devices (e.g., "This device is user-friendly"). Typically, quantified usability objectives include:

- Human performance goals (objective goals), such as task completion time, success rate or error rate and type, learning time, and accuracy.
- Efficiency (number of total steps and missteps), such as the number of references to instructions or online help.
- User satisfaction (subjective goals), using such approaches as rating scales (Likert, e.g., agree or disagree, or comparative ratings) and semantic differentials (pick a rating between two opposite adjectives).

In systems in which usability objectives are relevant, they should be validated as part of customer requirements (using common market research techniques, such as interviews, surveys, and focus groups) and compared with competitive benchmarks (usually obtained from published studies or from comparative usability testing of best-in-class competitors' products). Only a few critical task-related usability objectives typically are necessary. Examples of quantitative usability objectives or goals are:

- 90 percent of experienced nurses will be able to insert the infusion pump tubing set on the first try with no instructions, and 100 percent will be able to correct any insertion errors.
- 90 percent of experienced anesthesiologists will be able to calibrate the cardiac monitor within 2 minutes with no errors.
- Experienced operators working in port security will be able to detect potentially dangerous substances with a sensitivity of d' = 3 or greater.
- Unmanned aerial vehicle operators will be able to fly 3 planes at the same time in level flight and be able to land the 3 planes within 15 minutes, with no more than a 5-percent failure rate.
- 80 percent of experienced maintenance technicians will rate their satisfaction with the usability of device X as 7 or higher on a 10-point satisfaction scale.
- After reading the quick reference card, 90 percent of experienced clinicians will be able to properly configure the display on the first try to show the two ECG lead traces.
- 80 percent of experienced intensive care unit nurses will prefer the readability of the display of the latest-generation ventilator monitor compared with the existing monitors.
- 95 percent of technicians with no prior experience with this type of network management system will achieve the target mastery level in 2 or fewer hours of use.

Recommendation: For cases in which usability objectives have been shown to be useful, conduct research to develop better ways to investigate, set, and use them as acceptance criteria. This research would specifically show the value and limitations of usability objectives in achieving overall project goals. Specifically:

- Improve methods for demonstrating when usability objectives are valuable by surveying DoD and commercial projects on their successes and failures in using usability objectives, collecting examples of usability objectives from surveys and literature reviews, and creating a taxonomy of usability objectives.
- Improve methods for creating and setting usability objectives by surveying methods that have been used and their strengths and weaknesses; conducting experimental research on the relationship between using risk-management techniques, such as failure mode and effects analysis and fault tree analysis, to set usability objectives and whether these projects are successful and meet project and mission goals; and searching the literature in other domains, such as software and electrical engineering, for how they have used quantifiable performance objectives and how they set and validate them.
- Improve methods for validating usability objectives by surveying validation methods that have been used and their strengths and weaknesses, and conducting literature reviews of techniques used in other domains, such as marketing research, to validate their objectives.
- Improve methods for using usability objectives as a subset of project acceptance criteria by surveying techniques (including their strengths and weaknesses) for using usability objectives as acceptance criteria, including hypothesis testing and appropriate statistical techniques that have been used.

Maximize the Cost-Effectiveness of Usability Evaluation

Although usability evaluation methods are widely used, no systematic and generalizable research has been carried out on the study size, scope, or protocols that cost-effectively identify the most important usability problems. Nielsen et al.
(1994) analyzed the results of usability studies in the early 1990s to produce a formula relating the number of test participants to the proportion of usability problems identified. The formula has been criticized as being applicable to only a limited class of products. Molich and Nielsen (1990) have shown that different usability evaluation procedures identify different subsets of problems; however, there is no good matching of problems to evaluation procedures. Furthermore, it is rarely cost-effective to evaluate every permutation of user type and task. There is little applied research evidence, and there are few practices, to assist a practitioner in deciding how many studies are cost-effective for reducing the risk of human-system mismatches in a particular development environment, or in determining which groups or strata of users to include. Market researchers have developed efficient methods from sociology for defining and using segmented or stratified samples, and there has been a small amount of research in human-computer interaction (principally by Siegel and Dray, 2001) aimed at integrating these market-oriented methods with traditional methods.

Recommendation: Conduct research to generalize the sample size formula developed by Nielsen so that it can be applied to a wider range of products and systems, taking into account such factors as system complexity, job function diversity, end-user demographics, and other relevant factors.

Recommendation: Conduct research to understand which evaluation procedures are most appropriate for different types of products and systems, and how the evaluation procedure can be refined to maximize the number of problems identified most cost-effectively while producing valid and reliable results.

Recommendation: Conduct research to understand how to choose culturally appropriate evaluation methods, how to treat each method as a lens on a potentially larger set of usability problems, and how to translate from the constraints of a particular evaluation method into a more general or canonical description of the usability problems that were discovered and clarified by that particular method.

Recommendation: There is often a shortage of skilled personnel to carry out usability evaluations. Conduct research to establish whether members of a development team without a formal human factors background could be trained to carry out simple usability evaluations that produce valid and reliable results or, failing that, to understand the trade-offs in data collection and quality when HSI methods are carried out by untrained practitioners.
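The Nielsen sample-size formula referred to above models the proportion of usability problems found by n test participants as 1 − (1 − L)^n, where L is the probability that a single participant encounters a given problem (roughly 0.31 averaged over the studies Nielsen analyzed). A minimal sketch:

```python
def proportion_found(n_participants, problem_rate=0.31):
    """Expected share of usability problems found by n participants,
    per Nielsen's formula: 1 - (1 - L)**n.

    problem_rate (L) is the probability that one participant
    encounters a given problem; 0.31 is the average reported for
    the studies Nielsen analyzed, and varies by product class.
    """
    return 1 - (1 - problem_rate) ** n_participants

# With L = 0.31, five participants are expected to expose about
# 84 percent of the problems; gains diminish with each added user.
five = proportion_found(5)
```

The criticisms noted above amount to saying that L is not a universal constant, which is exactly what the recommendation to generalize the formula (for system complexity, user diversity, and so on) would address.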
Recommendation: Conduct research to establish how precisely the evaluation procedure needs to be specified to ensure that two organizations will produce acceptably similar usability measures for summative evaluation.

Identify and Assess HSI Risks

It is often stated in the HSI discipline that usability or human factors risks that are not addressed in the engineering design process are the basis for catastrophic errors. The profession tends to fall back on the history of well-known events, such as Three Mile Island, Bhopal, and the Vincennes downing, to illustrate the perils of failing to address human-system issues in design. While these examples can be compelling, they do not provide a rigorous basis for understanding risks in developing systems or for analyzing the potentially catastrophic error conditions that may result from human operation. There are many human error classification schemes, but they tend to be locally focused and do not scale up to system-wide implications.

We envision the initial research activity resulting from this recommendation to be the development of a comprehensive database of HSI risks that are described at multiple levels and from multiple perspectives, from the initiating activity (e.g., a cognitive error) to a system-wide or society-wide result (e.g., melting the core of a power plant). This is both a theoretical and a practical research activity, requiring the integration and extension of various error classification schemes with larger scale system impacts, such as costs, malfunctions, and rework, among others, and with multiple theoretical and even political frameworks. We see this research activity as going well beyond the typical cost-justification exercise for human factors engineering and as resulting in a systems model of HSI risks.

Recommendation: Conduct research to develop a robust HSI risk taxonomy and a set of methodologies for analyzing and comparing relevant risk representations and conflicting values. The general nature of the problem is to define the confluence of human and system factors that may align to create operational problems that exceed the design basis of the system or result in operations that were totally unanticipated (Reason, 1990, 1997).
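This "confluence of factors" framing maps naturally onto classical fault tree arithmetic: an AND gate models independent factors that must align to produce a failure, and an OR gate models alternative paths to the same top event. A toy sketch, with invented probabilities used purely for illustration:

```python
def and_gate(probabilities):
    """Probability that all independent contributing events occur."""
    p = 1.0
    for q in probabilities:
        p *= q
    return p

def or_gate(probabilities):
    """Probability that at least one independent event occurs."""
    p = 1.0
    for q in probabilities:
        p *= (1.0 - q)
    return 1.0 - p

# Toy top event: an operator slip AND a masked alarm must align,
# OR a software fault occurs on its own (all numbers invented).
aligned_factors = and_gate([0.01, 0.05])
top_event = or_gate([aligned_factors, 0.0002])
```

The limitation the text identifies is visible even here: the human-side leaf probabilities (the operator slip, the masked alarm) are exactly the parameters that current error taxonomies do not supply in a system-wide, scalable way.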
Such features have been referred to as "emergent" in discussions of systems of systems, and that is really the principal focus of this research: linking the human, hardware, and software systems with analytic techniques that can better identify extreme situations. Incorporating the concepts of the relatively new domain called resilience engineering (Hollnagel, Woods, and Leveson, 2006) will help to move this approach forward.

Recommendation: Extend traditional fault tree and risk analysis techniques to better identify the "boundary cases" that may lead to extreme operational consequences.

The benefit to the HSI field of conducting this research will be to establish a more robust basis for risk analysis and design than exists today. The error taxonomies are a start, but they tend to leave off where theoreticians stop: well before examining the linkages in complex systems during the design process. The overall vision of this research recommendation is that the results will place HSI risk analysis on a more even footing with well-accepted risk methods, such as the probabilistic risk analysis performed in designing complex process plants, and that they will extend traditional fault analysis techniques to identifying and addressing situations that are beyond the typical design-basis faults.

Improve the Communication of Risk

The analysis of risk must be done systematically, with great attention to use error or operational risk, business risk and mission risk, and (when appropriate) societal risk. In this report, the theme of risk reduction is mentioned quite often. Techniques such as failure modes and effects criticality analysis and fault tree analysis are recommended to analyze and control risk. These methods have been in use for many years, but they suffer from methodological problems, mostly involving how to make reliable estimates of risk parameters, such as fault likelihood and severity of consequences. Another major issue concerns the weaknesses in the ways these project and user risks are communicated to decision makers and other stakeholders, as well as the political processes that may be required to reconcile and integrate views of risks across multiple constituencies that may have different perspectives on systems and their implications (e.g., to achieve a satisficed solution).

Recommendation: Conduct research studies to show the value of improved assessment and shared representations that quantify risk level for improved communication of business and operational risks to management and development team stakeholders. Specifically:

- Survey communication techniques in other domains, such as advertising, sales, and news, and categorize success factors that could apply to business and operational risk communications.
- Conduct literature searches and analyses of successful communication techniques used in other domains that might be applicable to risk-management communication stakeholders.
- Conduct experiments comparing different risk communication techniques; for example, conduct risk-estimation calibration exercises to improve risk communication, as measured by changes in operator or decision-maker behavior.

Recommendation: Support applied interdisciplinary investigations into the communication, representation, and negotiation of risks and related issues, with the goal of assisting conflicting parties in mutual understanding and satisficed decision making.

Identify and Assess HSI Contributors to System Adaptability and Resilience

While humans are often viewed as the "weak links" in systems that contribute to errors and risk, a growing body of literature has shown that people in fact play a critical role in system resilience: the ability of systems to operate effectively in the face of unanticipated disturbances. Individuals, teams, and organizations contribute to system resilience by planning for, recognizing, and adapting to perturbations and surprises, especially ones that fall outside the range of situations the system was designed to handle (e.g., Carthey, de Leval, and Reason, 2001; Weick and Sutcliffe, 2001). Alternatively, individuals and management policies can be detrimental to resilience. This has led to a newly emerging area called resilience engineering, which attempts to advance the study and design of systems that exhibit resilience (Hollnagel, Woods, and Leveson, 2006; Woods and Hollnagel, 2006). More research is needed to understand the role people play in contributing to or inhibiting system resilience and how new tools and technologies can be deployed to enhance people in the former role.

Recommendation: Conduct research to understand the factors that contribute to system resilience, the role of people in resilient systems, and how to design more resilient systems. Some of the key questions that need to be addressed include the following:

- What kinds of knowledge and strategies enable people (particularly experts) to catch and recover from error and to adapt to unanticipated situations?
- What methods can be used to analyze, measure, and monitor the resilience of organizations, systems, and systems of systems?
- What traits and metrics enable systems to be developed and evaluated according to their adaptability and resilience?
- What methods can be used to model and predict the short- and long-term effects of change on adaptability and resilience?