Although interest in understanding the role of humans in systems and accommodating that role in design has a history of more than 60 years, there has been a continuing concern that, in each phase of development, the human element is not sufficiently considered along with hardware and software elements. When information about the performance characteristics and preferences of operators and users is not introduced early enough in the process, there are higher risks for both simple and catastrophic failures in the systems that are produced. This leads to additional costs required to revise the design late in the development cycle and even sometimes to revisions after it has been fielded. Human-system integration (HSI) is concerned with ensuring that the characteristics of people are considered throughout the system development process with regard to their selection and training, their participation in system operation, and their health and safety. It is also concerned with providing tools and methods meeting these same requirements to support the system development process itself.
This volume provides a vision for integrating an understanding of human capabilities and needs into system design using an incremental model of systems engineering development that continually assesses risks, including risks associated with the human element, at each phase of the system development effort. The chapters present a large variety of methods (1) for describing human capacities, limitations, and needs, their tasks, and the environments in which they work and (2) for characterizing and evaluating alternative designs that require some form of human-system interaction. In the context of developing a single system, these methods are extremely effective for providing needed information in a timely manner when applied by trained human-system design professionals. When their application is ineffective or inappropriate, the causes can often be traced, in part, to lack of communication within the organization and the development team and to a dearth of fully trained professionals as team members. Additional methods and approaches are needed to complement the existing methodology as systems become more complex and the focus shifts from the design and operation of individual systems to systems of systems.
A brief history of key events in the development of human factors appears in Box 1-1, “Events in the Growth of Human Factors.” Some of these events were driven by the interest of the behavioral science community; others came about as a result of accidents
and safety concerns. In recent years, efforts to effectively incorporate humans into the system have been referred to as human-system integration. Two important features of the HSI approach are (1) a user focus on all aspects of the systems definition, development, and deployment stages and (2) the combined application of human-related technologies to the HSI domains (Booher, 2003a, p. 7). A key element of the HSI approach is the coordination and integration of the HSI domains at each system life-cycle phase (U.S. Department of Defense, 1999). The HSI domains cover issues of manpower, personnel, training, human factors engineering, safety, health, and survivability. While the committee pays particular attention to the integration of human factors engineering in the system life cycle, we also explore approaches to integration across the HSI domains. It is important to note that we are concerned with the application of human-system integration in the commercial context as well as in the military context.
In this report we use the term “human-system integration” to refer to the design activities associated with ensuring that the human-system domains described above are considered in concert with all the other design activities associated with the systems engineering process, so that the resulting designs are truly responsive to the needs, capacities, and limitations of the ultimate users of the systems. Although human factors engineering is but one of the HSI domains, the one concerned with providing the methods and expertise to take account of human performance capacities and limitations in formulating effective system designs, it receives particular emphasis in this report because the methods appropriate to it are often the same methods needed for the other domains. Human-system integration is also concerned with the design process itself. The design process requires humans—stakeholders and design team members—with their own performance capacities and limitations, and with diverse interests, to work together. It is important to ensure that the tools and methods supporting that process meet the requirements of human-system integration as well.
One motivation for undertaking this study now is that industry and government are finding profound changes in the nature and complexity of the systems they seek to develop and at the same time are challenged to shorten the development cycle for new systems. There is pressure to reduce the staff required to support system operation, and this leads to increases in automation. However, not all automation actually reduces required staffing. Sometimes automation changes the job requirements and takes away the hands-on knowledge that has proved to be so useful for maintaining “situation awareness.” Sometimes it actually creates more work because now the automation, as well as the system itself, must be monitored and
controlled. Sometimes it reduces the reliability and trustworthiness of the overall system and increases the requirements for back-up personnel. System designers, like people in general, can be subject to an “over-confidence” bias, focusing on the potential benefits of new technology while failing to anticipate the complex interactions and new problems that may emerge (Feltovich et al., 2004). This has been referred to as the “envisioned world” problem (Woods and Dekker, 2000). There is an urgent need for improved HSI methods and tools that will enable system designers to anticipate and head off potential problems earlier in the design process (Woods, 2002).
Considering the design of individual interface workstations in isolation is no longer enough. Today’s complex systems are operated by teams of individuals whose interactions must be taken into account. Even considering single systems is not enough. Currently there are requirements to operate multiple systems—systems of systems—in interaction with each other. The military is particularly concerned with systems of systems, although they are of equal concern in civilian industry (e.g., hospital systems, complex interlinked communications systems). Furthermore, many of these systems of systems are adding an organizational component and respective complexities to the technological and personnel complexities already inherent in complex systems. Finally, the emergence of service-oriented architectures and the approaches called “Web 2.0” to combinations of functionalities add to the immediate complexity and potential interdependencies of systems and their services.
The field of design is also undergoing rapid change at this time. There is continued pressure to reduce the design cycle time. Software and hardware development methodologies supporting the design process are proliferating, but there is little understanding of which tools and methods are best for which purposes. Similar methods and tools are created by different communities of practice with little awareness of the tools and best practices in the related fields. There has been no comprehensive framework to organize competing methods, and, as a result, comparisons tend to be situational with correspondingly limited generalizability.
In spite of this long history, and in part because design continually faces new challenges, there are many examples of systems that have either failed entirely or have been adopted, despite their inadequacies, because of the need for their capabilities. Often these adopted systems were considered unsuccessful because they failed to meet the requirements of the human users—they required unreasonable workload, induced psychological and physical stress, or resulted in costly human error. They failed because their developers had an inadequate understanding of, or overlooked consideration of, the unique capacities and limitations of people. Examples include (1) military command and control vehicles for which the requirement for operation on the move had to be dropped late in the program, because
vibration and motion-induced sickness in the operational crew was found to be unacceptable; (2) the costly abandonment of a new air traffic control console before it was introduced into the workplace because it was unreasonably complex and difficult to operate; and (3) the confusion that arises from controlling a home media system with five different remote controls, or even with one “universal” remote with five different modes.
By the same token, there are examples of effective systems that have succeeded specifically because of the attention that was paid to human-system integration during system development. A primary example is the current generation of Navy Tactical Decision Support systems that was designed as part of the TADMUS (Tactical Decision Making Under Stress) program. This successful program was initiated in response to the tragic downing of an Iranian Airbus in 1988 by the USS Vincennes, caused in part by poor human-system integration. TADMUS was a success because it took a human-centered design approach: the research and development (R&D) supporting it was assigned high priority by senior officer staff, and it brought researchers together with operational personnel. The confluence of these considerations gave the project a high profile and has influenced much subsequent design in the Navy.
Another outstanding example of success is the Army Comanche Helicopter program. By modifying the acquisition program specifically to recognize human-system interaction as an integral part and by introducing HSI requirements early and throughout the acquisition program, the government-industry team substantially improved overall human-system performance while realizing cost savings of 40 times the HSI investment. This program was abruptly cancelled in 2004, the cause being attributed to “challenges of software integration,” not to human-system issues. The reader interested in more detail about the human-system features of this program, or in further examples from Army programs, is referred to the excellent review by Booher and Minninger (2003).
An example of the commercial importance of human-system interaction in risk analysis and risk avoidance is the precise human-performance modeling done by the NYNEX Science and Technology organization in the evaluation of a proposed new operator services workstation (Gray et al., 1993). Using keystroke-level analysis and parameter estimation, the NYNEX team was able to show that the proposed new design would paradoxically reduce human productivity. This early analysis, as well as subsequent decisions regarding the product, was credited with saving $2 million annually. In a similar R&D project, HSI observations and analyses of over 500 directory assistance calls at US WEST helped to correct the first voice recognition application for directory assistance (Muller et al., 1995). Initial outcomes showed that the technology-assisted calls took significantly longer than conventional calls and resulted in extremely negative customer response. The information gained through qualitative and quantitative analyses showed how to reverse the negative work outcomes through a simple redesign of the dialogue between customers and the voice recognition technology. In addition to obtaining the labor savings that were promised by the technology, the redesign also improved the customer responses, leading to the voice recognition technology that is part of nearly all U.S. directory assistance calls today.
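The keystroke-level analysis used in the NYNEX study can be illustrated with a small sketch. The operator times below are the classic keystroke-level model (KLM) averages from Card, Moran, and Newell; the two task sequences are hypothetical stand-ins for illustration, not the actual NYNEX workstation procedures.

```python
# Illustrative keystroke-level model (KLM) sketch. Operator times are the
# classic published averages; the task sequences below are hypothetical,
# not the actual NYNEX operator-services tasks.

KLM_SECONDS = {
    "K": 0.20,  # press a key (average skilled typist)
    "P": 1.10,  # point with a mouse to a target on screen
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for a step
}

def predict_time(operators):
    """Sum per-operator times to predict skilled, error-free task time."""
    return sum(KLM_SECONDS[op] for op in operators)

# Hypothetical comparison: a design that replaces a single function key
# with a reach to the mouse and an on-screen selection.
keyboard_design = ["M", "K", "K", "K"]
mouse_design = ["M", "K", "K", "H", "P", "K"]

print(round(predict_time(keyboard_design), 2))  # → 1.95
print(round(predict_time(mouse_design), 2))     # → 3.45
```

Even this toy comparison shows how a seemingly modern interface can add a second and a half to every transaction, which, multiplied across millions of operator calls, is how an analysis of this kind can translate into millions of dollars.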
The reasons some system designs fail are multidimensional and complex. Here are a few that the committee has identified:
Failure to introduce human factors considerations early enough—in some cases needs and requirements are forecast even before the formal system acquisition process begins.
Lack of effective methods and tools to predict direct impacts and ripple effects of envisioned future systems early in the design process, particularly in the case of large-scale systems and systems of systems with diverse elements that can interact in complex, difficult to anticipate ways.
A tendency to focus on people as the error-prone weak links in a system, to be “automated away,” rather than as important contributors to overall system resilience who enable systems to adapt to unanticipated situations and who need support in that role (Hollnagel, Woods, and Leveson, 2006).
Failure to apply known good methods routinely in practice, such as those specified in Department of Defense (DoD) and international quality standards (ISO) and recommended practices.
Lack of ability to abstract generalizable concepts and principles, as well as transportable models, across application contexts, limiting the ability to grow a solid body of human factors design knowledge.
Lack of synergy between research and practice, with the result that practitioners are not sufficiently aware of relevant research and research is not sufficiently informed by the body of knowledge gained from practice (Norman, 1995; Woods and Christoffersen, 2002).
Lack of adequate HSI metrics to support progress monitoring, pass/fail reviews, and system-level evaluation.
Inadequate or poorly documented data on relevant human task performance.
Lack of effective use of methods and tools to support the HSI process.
Difficulty of cost-justifying resource allocation to study and resolve human-system integration issues.
Inadequate education and training of system developers to sensitize them to the HSI issues.
Limited opportunities for the education of HSI specialists.
Failure to assign resources as a result of lack of awareness that specific resources are needed to address HSI concerns.
Conflicting requirements of various stakeholders in the system development process.
Insufficient advocacy for consideration of human-system integration at the top level of relevant organizations.
This list, developed independently, is quite consistent with the list cited in Booher and Minninger (2003). As previously mentioned, an underlying issue regarding system failures and the inadequacies associated with current system development and human-system integration may be that many systems should actually be regarded as systems of systems.
A consensus view held by the fellows of the International Council on Systems Engineering (INCOSE) is that a system is a collection of different elements that together produce results not obtainable by the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, and documents; that is, all things required to produce system-level results. The results include system-level qualities, properties, characteristics, functions, behavior, and performance. Furthermore, INCOSE holds that the value added by the system as a whole, beyond that contributed independently by the parts, is primarily created by the relationships among the parts; that is, how they are interconnected. What has changed in recent years is that, increasingly, these parts are systems themselves.
Over a dozen different definitions of a system of systems have emerged, emphasizing different aspects of product, process, and personnel (Jamshidi, 2005; Lane and Valerdi, 2005). Two fairly definitive treatments, Maier (1998) and Sage and Cuppan (2001), identify the distinguishing features: the component systems (1) achieve well-substantiated purposes in their own right even if detached from the overall system and (2) are managed, in large part, for their own purposes rather than the purposes of the whole; in addition, the system of systems (3) exhibits behavior, including emergent behavior, not achievable by the component systems acting independently and (4) involves a lead systems integrator (LSI) with sufficient capability, authority, and responsibility to architect, acquire, and integrate the component systems into a satisfactorily performing whole. Levis (2006) adds a condition that component systems may be added or removed while other parts of the overall system are operating.
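As a compact restatement, the four distinguishing features above can be encoded as predicates. The field names and the example component systems in the sketch below are our own illustrative shorthand, not a standard taxonomy.

```python
# Schematic encoding of the four distinguishing features of a system of
# systems (after Maier, 1998, and Sage and Cuppan, 2001). Field names and
# example systems are illustrative assumptions, not a standard taxonomy.
from dataclasses import dataclass

@dataclass
class ComponentSystem:
    name: str
    operationally_independent: bool  # achieves its own purposes if detached
    managerially_independent: bool   # managed for its own purposes

def is_system_of_systems(components, emergent_behavior, lead_integrator):
    """All four features must hold for a collection to qualify."""
    return (bool(components)
            and all(c.operationally_independent for c in components)  # (1)
            and all(c.managerially_independent for c in components)   # (2)
            and emergent_behavior    # (3) behavior beyond any one component
            and lead_integrator)     # (4) an LSI with authority to integrate

fleet = [
    ComponentSystem("search-and-rescue aircraft", True, True),
    ComponentSystem("hospital emergency network", True, True),
]
print(is_system_of_systems(fleet, emergent_behavior=True,
                           lead_integrator=True))  # → True
```

The point of the encoding is that the first two features are properties of each component, while the last two are properties of the assembled whole, which is why integration effort cannot be delegated entirely to the component owners.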
Systems of systems may differ in several aspects, such as the number of separately managed component system owners, the number of separate missions (emergency medical, search and rescue, crisis response, insurgency suppression, limited or full-scale warfare), and the degree to which the component systems are newly developed or already developed. Particular challenges for human-system integration are multiowner, multimission systems
of systems with numerous already-developed systems, which are likely to have incompatible human-system interfaces, operating modes, assumptions about operator capabilities and underlying infrastructure, and degrees of mission criticality or safety criticality.
As systems become increasingly complex, there is a corresponding increase in complexity in the systems (i.e., enterprises) that develop, operate, and sustain these systems in a global context (Nightingale and Rhodes, 2004). Traditional methods related to systems engineering, enterprise engineering, or enterprise architecting are inadequate for designing and managing systems within systems and systems within enterprises. Broader and more holistic methods within an engineering systems perspective are needed (Nightingale and Rhodes, 2004). While HSI methods offer considerable contributions to analyzing and designing complex systems, methods related to systems of systems and enterprises are still inadequate.
An example of a system of systems under development is the Air Force Falconer Air Operations Center in Arizona; the Air Operations Center in Central Command is described in The Integrator (Mayer, 2005) as follows:
The Electronic Systems Center developed Falconer AOC “system of systems” is the Combined Forces Air Component Commander’s weapon system for commanding air and space forces. A Falconer operating with a Theater Response Package—meaning fully equipped and manned for a theater war—can manage and control up to 3,000 air sorties a day.
CHARGE AND SCOPE
Many methods, tools, and techniques are available in the literature for addressing various aspects of human-system integration, and there are several methods textbooks and standards:
Handbook of Human Factors and Ergonomics Methods (Stanton et al., 2005);
Handbook of Human Factors and Ergonomics (Salvendy, 2006);
Handbook of Human Systems Integration (Booher, 2003b);
A Guide to Task Analysis (Kirwan and Ainsworth, 1992);
Handbook of Human Factors Testing and Evaluation (Charlton and O’Brien, 2002);
Systems Engineering: System Life Cycle Processes (International Organization for Standardization, 2002);
Software Engineering: Software Product Quality Requirements and Evaluation (International Organization for Standardization, 2006); and
Handbook of Systems Engineering and Management, revised edition (Sage and Rouse, in press).
These claim to offer a systematic approach, but each has serious deficiencies. The methods tend to exist in isolation. Nemeth (2004) has assembled existing methods into a coherent book, but still there are gaps in the existing methods and tools, and more work is needed to improve their integration into a coherent methodology with a suite of tools that would support such an integrated methodology. These are the issues we address in this report. Specifically, the charge to the committee is to
provide a comprehensive review of issues involved in design throughout the system life cycle that need to be addressed by a consideration of human cognitive and physical performance characteristics. This review will be used as a framework for further analysis of methodologies.
evaluate the state of the art in human-system engineering and (1) product development processes, (2) product design methodologies, and (3) product design tools.
develop a vision for an integrated, multidisciplinary, generalizable, human-system design support methodology and tool set. Identify a set of core methods and tools needed to support design activities associated with a variety of systems.
recommend a research plan suggesting how to achieve this ideal.
Although the U.S. military requested this report, our goal is to provide recommendations that are also relevant to other government departments as well as industry, including the process control, manufacturing, and service industries. Furthermore, the committee defined the scope of its review and analysis to include environmental factors, organizational and work context, and matching the system to users’ needs as well as taking account of human cognitive and physical capacities and limitations. Many audiences have a vested interest in, or will benefit from, better methodologies for making systems useful and relevant, such as the following:
acquisition and program managers,
human factors/usability professionals and those representing other MANPRINT domains,
policy makers and regulators, and
In preparing this report, we tried to remain sensitive to these different constituencies and are hopeful that various chapters and recommendations are relevant to different subsets of them.
The Military Sector
Both the Army and the Navy have active HSI programs that were created to inform system development efforts about the human side of system performance and the decisions that are required throughout the development cycle to adequately consider human roles and contributions. The Air Force is in the process of implementing a similar system. The Army’s program, known as MANPRINT, has been operating since the early 1980s; the Navy’s system, SEAPRINT (Systems Engineering, Acquisition, and Personnel Integration), was formalized in 2003 to establish a MANPRINT-like approach to Navy system design and acquisition. The military services have control over all decisions related to development, fielding, staffing, and operation of their new systems.
MANPRINT is “a comprehensive management and technical program designed to improve total system (leader, unit/soldier, and equipment) performance by focusing on the human requirements for optimal system performance” (U.S. Army, 2000). It consists of seven domains: manpower, personnel, training, human factors engineering, system safety, health hazards, and soldier survivability. SEAPRINT “provides the Navy with a single, integrated performance-based process that addresses all aspects of Human-System Integration—from capability definition through personnel delivery” (U.S. Navy, 2005). It also includes seven domains, differing from MANPRINT by combining safety and health and adding a domain labeled habitability. Both programs are compatible with the seven HSI domains listed in the defining DoD Instruction 5000.2 Operation of the Defense Acquisition System (U.S. Department of Defense, 2003a).
Representatives of the Army and the Navy have specified two major problems in effectively applying these programs. The first is getting inputs from the required specialists to be considered early enough and at all stages of the system development life cycle. The second is the inability to effectively integrate HSI efforts across domains. In addition, many HSI analyses are applicable to more than one domain, and decisions made in one can significantly constrain or influence decisions in another. Despite these opportunities for integration, those working in each domain tend to function separately, applying their own methods and tools.
The domains of manpower, personnel, and training (MPT) encompass both supply and demand issues. Supply involves the sources of personnel, their background, and how they will be trained. Demand involves the determination of the number and skill levels of personnel required for each job specialty. The committee’s focus is on the demand side, where manpower, personnel, and training impact design through their implications
for human factors requirements. Managing workload is a critical design issue in human-system integration. The number and type of personnel are intimately tied to workload requirements. Similarly, there are important trade-offs between the usability of a system and the requirements for training. How does one consider the trade-offs among staffing levels, personnel quality, personnel turnover, training requirements, and system design? Complex systems represent a usability challenge that can be solved by better design or by more extensive training. Both of these kinds of issues influence the manpower, personnel, and training investment required in new system development. Although we do not address the supply side directly, it is important to understand the approaches and decisions in supplying manpower, personnel, and training as a context for the committee’s work.
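One way to make such trade-offs concrete is a rough cost comparison between a more usable design and a less usable one. The sketch below is a hypothetical back-of-the-envelope model; every parameter value is invented for illustration and is not drawn from IMPRINT or any Service cost model.

```python
# Hypothetical back-of-the-envelope usability/training trade-off model.
# All parameter values are invented for illustration; they are not drawn
# from any Service cost model.

def lifecycle_personnel_cost(n_operators, annual_cost_per_operator,
                             training_hours, cost_per_training_hour,
                             years=10, turnover_rate=0.2):
    """Rough cost of staffing plus (re)training over a system's life.

    Higher turnover makes the training investment recur as replacement
    personnel are brought up to speed.
    """
    staffing = n_operators * annual_cost_per_operator * years
    trained_people = n_operators * (1 + turnover_rate * years)
    training = trained_people * training_hours * cost_per_training_hour
    return staffing + training

# A more usable design needing 40 hours of training per person vs. a less
# usable design needing 120 hours, everything else held equal:
usable = lifecycle_personnel_cost(10, 80_000, 40, 100)
complex_ = lifecycle_personnel_cost(10, 80_000, 120, 100)
print(complex_ - usable)  # → 240000.0
```

Under these invented numbers, the harder-to-use design costs an additional $240,000 over ten years, a figure a program office could weigh against the cost of the design effort needed to reduce training time, or against hiring higher-aptitude personnel who train faster.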
Manpower refers to the number and type of personnel who operate, maintain, support, and provide training for systems. Input concerning the number of personnel needed comes from policy makers in the Pentagon at the top and from manpower analysts at the unit and the system levels at the bottom. Although manpower assessment techniques are available to determine the appropriate number of operators/people for each piece of equipment/task, these techniques are often not used because they are labor-, skill-, and knowledge-intensive. A good example is IMPRINT (Improved Performance Research and Integration Tool), a modeling tool developed by the Army Human Engineering Directorate and used by the Army and the Navy for manpower planning and to inform human-system design decisions (Allender et al., 2005). This tool requires substantial training to be used effectively, and personnel with this training are not always readily available.
Another issue concerning the adequacy of input from the bottom up is that results of the analyses are often politically inconvenient or are overshadowed by budget constraints or logistics requirements. Furthermore, expertise in unit-level manpower analysis is rare. Decisions made regarding the number of personnel can have an important influence on the requirements for personnel basic abilities, system features, and training.
Personnel refers to the human aptitudes, skills, and experiences required to perform the jobs of operators, maintainers, and support personnel. The Services apply a standardized set of entry requirements that have changed little over the past decades. The supply of enlisted personnel to the military comes primarily from 18- to 24-year-olds in the general population who have received a high school diploma and can achieve an acceptable score on the Armed Forces Qualification Test (AFQT). The Services are almost completely staffed by applicants with scores in the higher range on the AFQT. The AFQT score and scores on combinations of subtests in the full Armed Services Vocational Aptitude Battery (ASVAB) are used to determine qualifications for various jobs. The actual assignment to a job is also driven
by the availability of a position opening. It is important to note that the philosophy of the military services is to recruit motivated individuals with an appropriate ability level—the skill and the knowledge needed for each military job is then developed through military training.
Studies examining how well the ASVAB subtest scores predict job performance have shown only a weak relationship (National Research Council, 1991). More recently, the level of prediction from the ASVAB to job performance has been further reduced by the fact that, although jobs are changing with the introduction of technology, the old job descriptions remain in place.
Training prepares personnel to perform the tasks necessary to meet the mission or goals and objectives of the system. Development of training requirements, methods, curricula, and training system design are important parts of the overall system design process. The length and intensity of training depends on the background, ability levels, and learning styles of the personnel in the training class; the complexity of the system; and the level of skill and knowledge needed to ensure the desired level of performance speed and accuracy. Some training is designed for individual task performance; some for team or unit-level performance. An important input to effective training is a task analysis that identifies the skills and knowledge needed for acceptable performance—this analysis requires updating as the system configuration changes or as new automation is introduced. Although there may be some task analysis requirements that are unique to the training domain, the methods for creating this task analysis are substantially the same as those used for other system development purposes discussed in this report. Inadequate training can result when work and task descriptions are outdated. Training deficiencies may also result from failure to allocate the necessary training time and budget, lack of flexible training schedules needed to meet learning requirements, and lack of useful proficiency criteria.
Manpower, personnel, and system design decisions should take into account the level of training needed and the feasibility of delivering that training in the allowable time frame.
The Private Sector
The private or commercial sector is more difficult to characterize because of the wide variety of systems and products, the differences in approaches to human-system design, the central role of marketing, and the greater likelihood in the commercial product environment that projects will be cancelled if milestones are not met in the early stages of the development process. Companies generally develop products for use by other companies, groups, or individuals. Some products require extensive training, and
some are subject to safety regulations. Development efforts are driven by market forces; competition; time constraints; safety and liability exposure; and customer characteristics, requirements, and budgets. HSI activities in the private sector include, but are not limited to, market research, risk analysis, product planning, development of product lines and platforms, usability testing, and product evaluation (Rouse, 2003).
Private-sector products cover a wide range of sizes, complexity, and level of human involvement. On one hand, for example, there are complex systems in manufacturing, process control plants, nuclear power plants, network management, and air traffic control systems. These systems include large numbers of personnel performing a highly structured set of jobs requiring technical skills and knowledge. In this context, considerations of manpower, personnel, and training are relevant. On the other hand, there are many smaller scale single-user systems (e.g., commercial products) for which training is critical but manpower and personnel issues are less relevant. Many commercial products are released for which user training is impractical, so they need to have self-evident, intuitive user interfaces to be successful; indeed, for web-based commercial services, user training is impossible, and ease of use becomes a significant, make-or-break attribute.
Many private-sector companies perform a user analysis, or a similar assessment of the intended user, in the early stages of design, using such methods as contextual inquiry, scenarios, task analysis, cognitive task analysis, ethnography, or participatory analysis (these are discussed in Chapter 6). This analysis of users’ capabilities is similar to the military’s personnel assessment. For some products, such as hospital medical devices, the user can be expected to have advanced skills and knowledge. When designing products intended for use by the general population, companies must account for a wide range of skill levels. With increasing regulatory pressure, companies are also designing for people with a range of disabilities, including visual, auditory, motor, and cognitive/developmental disabilities. A product that is poorly matched to users’ capabilities may create frustration for the user and lead to lower sales, an increased need for training and customer support, and overall increased cost. A product created to serve multiple types of users often has additional and unanticipated reach into new markets or applications.
Training takes many forms in the private sector. Most products include such training aids as user manuals, help menus, and product support help lines. For complex or difficult-to-operate systems, a formal training program may be required; alternatively, online training may be needed. Training requirements may be established as part of the design process or may be put into place after a product is on the market.
New developments challenge these simple, old ways of thinking about
development and deployment—especially the concepts often referred to as Web 2.0 (e.g., O’Reilly, 2005), in which each application provides a standardized interface (typically XML) to other applications, and new services can be created through simple interfaces among these existing applications (making a “call” between applications, similar to a subroutine call in a conventional program architecture). The standardization of data formats and protocols among these services allows very rapid prototyping and testing of new service concepts, and these integrations can lead to user experiences that appear to be entirely new concepts and functionalities. Each such web site or module uses these standardized formats to offer “services” that can be called from other web sites or modules—hence the more formal description as service-oriented architectures (Erl, 2005; SOA, 2006). We list five classes of these new “social software” services here (Allen, 2004; IBM, n.d.; Teton and Allen, 2007; see also Chi et al., 2007):
Combinations of data from multiple services, creating new services and new user experiences.
Easily consumed updates or “feeds” from user-created dynamic pages called “weblogs” or “blogs.”
Sharing of annotations of websites, pictures, music, and other web-addressable objects through “social tagging” of web resources in a shared database, as well as the evolution of user-created “folksonomies” as low-maintenance alternatives to high-cost enterprise taxonomies.
Sharing of dynamically updated personal information through person-centric shared databases.
Negotiation and co-creation of shared knowledge, accessible to millions of users, at user-constructed online encyclopedias.
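Taken together, these services illustrate the mash-up pattern described above: one application “calls” another through a standardized data format and combines the results into a new service. The Python sketch below is a minimal, hypothetical illustration of that pattern; the two “services” are stand-in local functions, and the service names, cities, and JSON fields are invented (real Web 2.0 services would be reached over HTTP and might return XML or RSS feeds rather than JSON).

```python
import json

# Two hypothetical "services," each exposing output in a standardized
# format (JSON here) -- stand-ins for independent web applications.
def weather_service(city):
    """Hypothetical weather service returning a standardized record."""
    data = {"Boston": {"temp_f": 41}, "Austin": {"temp_f": 72}}
    return json.dumps({"city": city, "temp_f": data[city]["temp_f"]})

def events_service(city):
    """Hypothetical events service with its own standardized output."""
    data = {"Boston": ["concert"], "Austin": ["film festival", "marathon"]}
    return json.dumps({"city": city, "events": data[city]})

def mashup(city):
    """A new service created by 'calling' the two services above and
    combining their data -- analogous to a subroutine call, but across
    web applications that share data formats."""
    weather = json.loads(weather_service(city))
    events = json.loads(events_service(city))
    return {"city": city,
            "temp_f": weather["temp_f"],
            "events": events["events"]}

print(mashup("Austin"))
```

Because the two services agree on a data format, the combining code needs no knowledge of how either service is implemented, which is what makes such integrations fast to prototype.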
These new service-oriented architectures present new challenges in several areas. First is the speed with which new services can be created: development time in this very open environment decreases from years to days. Second is the rate of change of the data in these new services, which can amount to many thousands of updates daily. Third is the decentralization of the “sourcing” and control of the information, which is typically contributed by thousands of people who do not necessarily have other ties or relationships to one another. Fourth is the current very loose security model for these services, which is likely to be tightened as the commercial and governmental uses of these technologies increase. All of these challenges highlight the need for input from users and analysis of the implications of these design alternatives for their human users, either before or while they are implemented. Without specific human-system requirements, the ease and speed of creation make it even easier for designers to pursue their own clever but often inappropriate designs.
In addressing the charge, the committee identified several major themes that are woven through the chapters of this report. These include adopting a risk-driven approach to determining the need for HSI activity; tailoring the selection of methods to meeting time and budget constraints; developing and using shared representations for communication of issues and results among domains and disciplines; designing systems that can accommodate changing conditions and requirements in the workplace; and integrating HSI inputs across human-system domains as well as across life-cycle phases.
The committee proposes an incremental commitment model as a useful approach to system development. Although it is not the only model that could be used on future human-intensive systems and systems of systems, it serves as a reasonably robust framework for explaining HSI concepts and for evaluating these via a set of case studies presented in Chapter 5.
This model is based on five principles that are critical to success:
satisficing of system stakeholders (e.g., users, acquirers, developers);
incremental growth of system definition and stakeholder commitment;
concurrent system definition and development;
iterative system definition and development; and
risk management through risk-driven activity levels and anchor point commitment milestones.

The details of this model appear in Chapter 2.
Adopting a Risk-Driven Approach
A central focus of the incremental commitment model is the progressive reduction of risk throughout the system development life cycle with the goal of producing a cost-effective system in which all stakeholders are considered winners. Risk reduction is accomplished through the application of all relevant disciplines. In the past, the risks associated with human-system integration have often been neglected in the system risk analysis process. In this report we emphasize the importance of including human factors and HSI risk as an integral part of this process. Cost-effectiveness is achieved by focusing resources on high-risk aspects of the development while deemphasizing development phases for aspects of the system that are judged to pose a limited risk. Key elements of the model are the anchor points at the end of each cycle that call for stakeholder evaluation and commitment. These anchor points correspond to DoD system development milestone reviews.
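One way to make the idea of focusing resources on high-risk aspects concrete is the classic risk-exposure calculation, in which each concern is scored by the product of its probability of loss and the size of that loss, and effort is allocated to the largest exposures first. The following Python sketch is illustrative only; the risk items and numbers are invented for this example and are not drawn from the report’s case studies.

```python
# Hypothetical development concerns for a system under design:
# (description, probability of loss, relative size of loss on a 1-10 scale).
risks = [
    ("operator overload in off-nominal conditions", 0.6, 9),
    ("display legibility in bright sunlight", 0.3, 4),
    ("database schema migration", 0.1, 2),
]

def risk_exposure(probability, loss):
    # Classic risk-exposure formula: RE = P(loss) * S(loss).
    return probability * loss

# Rank concerns so that scarce development resources go to the highest
# exposures first; low-exposure aspects can be deemphasized.
prioritized = sorted(risks, key=lambda r: risk_exposure(r[1], r[2]), reverse=True)
for name, p, s in prioritized:
    print(f"{risk_exposure(p, s):4.1f}  {name}")
```

The point of such a ranking at an anchor point review is not the arithmetic but the conversation it forces: HSI risks (such as the operator-overload item above) compete on the same scale as hardware and software risks.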
Engineering development risks are realized when development is impeded by unforeseen difficulties in implementation or costly overruns. In contrast, HSI risks may be realized only at the conclusion of a system development life cycle when the system is fielded. They may lead to (1) underutilization or disuse of a product or system because it is difficult, inefficient, or dangerous to use; (2) human error in the use of the product or system, resulting in delays, serious compromises in system performance, or higher operational costs; or (3) both. For safety-critical or defense systems, either of these risks can lead to catastrophic events, including serious injury or death. For the manufacturer of commercial products, loss of sales, product liability lawsuits, and product recalls are major potential results of failure to adequately consider HSI risks.
These operational stage risks are traceable to failures to fully integrate user needs and capabilities at earlier phases of the development cycle. To be effective, all risk-reduction approaches, including human-system integration, must be applied to identify and address risks during the early and middle stages of development. The use of such risk-reduction approaches allows developers (or stakeholders) to select one design approach over another, gain an understanding of unanticipated effects through simulation studies, and generally have a higher level of confidence that system development efforts are on track to meet requirements and avoid the operational stage risks of disuse, error, high costs, and lack of sales.
In this report we take the view that the analysis of HSI risks should be considered at the same level of importance as the risks that specific hardware or software functions will not be able to meet the required technical specifications. This consideration places HSI issues at the level of priority required to produce systems that will not fail due to poor attention to the MANPRINT variables of importance.
Tailoring Methods to Time and Budget Constraints
The committee recognizes that human-system integration is in competition with other system development activities for the resources controlled by the project manager. Sometimes the resource demands of the HSI team seem incommensurate with the project manager’s perceived benefits. This perception arises partly because much of the resource investment needs to occur very early in the process, yet the benefits are not harvested until late in the development process. Use of risk analysis to focus resources on critical development issues can help to ameliorate this concern. Nevertheless, the committee thinks that it is incumbent on the HSI specialists to tailor the application of their methodologies to the specific needs of a project. Most of the methods and tools described in this report are designed to be adjustable and scalable to meet the needs of specific projects.
Creating Shared Representations for Communication
Effective and efficient design requires meaningful communication among hardware, software, and human-system integration designers; among professionals in the domains of human-system design (e.g., personnel, manpower, training, human factors); and among designers, users, and other stakeholders. Just as an architect provides blueprints, perspective drawings, or physical models to communicate a design, when people from different perspectives collaborate in a design process, they bring various methods and tools to communicate effectively with other experts in the activity. In addition, each group often has its own mind set, language, and work practices. With so much diversity among the groups tasked with complex systems design, the potential for communication and collaboration failures increases if assumptions (and their associated mind sets) are not made explicit. Effective use of multiple shared representations to mediate the activities of these multidisciplinary teams can foster innovation and a more effective design process.
Shared representations “stand in” and mediate communication between and among people engaged in a collaborative process. From the HSI perspective, they can be stories, reports, spreadsheets, models/diagrams, prototypes, or simulations. Physical or electronic models of aspects of the human-machine system are shared representations that provide a bridge between research and design in complex systems. The act of modeling can help teams detect unintended relations and features and lead to new connections and ideas. Prototypes are one form of model that make explicit an aspect of form, fit, or functionality—they can range from simple sketches to full physical mock-ups. By predicting and highlighting potential performance limitations, computer simulations of the human-machine system are another form of model that can support shared understanding by the development stakeholders.
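As one concrete and widely known instance of a predictive human-performance model of the kind described above, Fitts’ law estimates the time to acquire a target from its distance and width, letting a team flag pointing-performance limitations before anything is built. The sketch below uses the original Fitts formulation; the constants a and b are purely illustrative defaults and must be fitted empirically for any real device and user population.

```python
import math

def fitts_mt(distance_mm, width_mm, a=0.05, b=0.1):
    """Fitts' law: predicted movement time (seconds) to acquire a target.

    a and b are device- and population-specific constants; the defaults
    here are illustrative placeholders, not fitted values."""
    index_of_difficulty = math.log2(2 * distance_mm / width_mm)  # bits
    return a + b * index_of_difficulty

# The model predicts that a wider target at the same distance is faster
# to acquire -- the kind of limitation a design team can surface early.
small = fitts_mt(200, 10)
large = fitts_mt(200, 40)
print(f"small target: {small:.2f} s, large target: {large:.2f} s")
```

Shared in a design review, even a toy model like this gives hardware, software, and HSI specialists a common, quantitative object to argue about, which is precisely the role of a shared representation.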
The committee thinks that a current impediment to effective identification of HSI issues and risks and utilization of the resultant recommendations is the often vague nature of the products of HSI analysis. We are therefore emphasizing the importance of shared representations that truly communicate effectively with the other engineering disciplines and project stakeholders.
Shared representations are useful at all phases of the system design life cycle and play an important role at the anchor points at which stakeholders are asked to make commitments and reach agreements. The chapters in Part II of the report describe a variety of shared representations, including stories and scenarios, prototypes, user models, and simulations. The use of these representations is further explored in later chapters.
Designing to Accommodate Changing Conditions and Requirements in the Workplace
New technologies provide new capabilities, and these often generate new expectations, roles, and ways of doing things that are not always anticipated ahead of time (Woods and Dekker, 2000). Unanticipated complexities can arise through increased system interconnectedness and interdependency, which create new sources of workload, problem-solving challenges, and coordination requirements. In turn, individuals in the system will adapt. They will exploit the new power provided by the technology in unanticipated ways, and they will create clever work-arounds to cope with technology limitations, so as to meet the needs of the work and human purposes. To accommodate changes and unintended effects, the system development process should be viewed as incremental and ongoing. It is important to continue observations and analysis, even after a system has been implemented, both to evaluate the validity of designers’ assumptions and to drive further discovery and innovation. For a system to remain work-centered over time, it must not only support the elements of work identified at the design stage, but it must also be able to accommodate elements that the initial design did not appropriately capture and be adaptable to meet the changing nature of the work. Systems need to be designed in ways to enable users to adapt the system to evolving requirements.
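At a very small scale, designing a system so that users can adapt it to evolving requirements often takes the form of an explicit extension point: the shipped code defines a hook, and users add behavior after deployment without modifying the core. The Python sketch below is a hypothetical illustration of that design choice; all class, method, and format names are invented.

```python
# Minimal sketch of an extension point. The shipped system defines a hook
# (register_formatter); users adapt the system in the field by registering
# new behavior without touching the core code.
class ReportGenerator:
    def __init__(self):
        # The single formatter shipped with the system.
        self._formatters = {"text": lambda rows: "\n".join(map(str, rows))}

    def register_formatter(self, name, fn):
        """User-supplied extension, added after deployment."""
        self._formatters[name] = fn

    def render(self, rows, fmt="text"):
        return self._formatters[fmt](rows)

gen = ReportGenerator()
# A user meets a new, unanticipated requirement locally:
gen.register_formatter(
    "csv", lambda rows: "\n".join(",".join(map(str, r)) for r in rows))
print(gen.render([(1, 2), (3, 4)], "csv"))
```

The design cost of such hooks is paid at development time, but they are what allow a fielded system to absorb the work-arounds and new uses that the original analysis did not capture.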
Researchers have argued for the importance of creating systems that afford the potential for productive adaptation to enable users to “finish the design” locally in response to the situated context of work. This idea can be extended to include not only local responses, but also adaptation of systems to keep pace with a constantly evolving world. The technologies of Web 2.0 represent an extreme version of this approach, emphasizing the importance of users as co-creators of information, co-editors of collections of information, and co-implementers of new features through the increasingly easy technologies that enable the aggregation of features and services into new functionalities, experiences, and utilities (referred to as “mash-up” technologies). In the latter sense, the design is never really finished. A significant challenge currently facing organizations is their ability to adapt to rapid and unpredictable change in more appropriate ways than their competitors, including the adoption of new technologies and business practices (Crisp, 2006). Changes in hardware and software must be accompanied by changes in the use of humans in the rapidly evolving systems.
The notion of designing for evolvability is discussed in more detail in Part II.
Integrating HSI Contributions Across Life-Cycle Phases and Human-System Domains
The primary features of the HSI concept are consideration of humans in the decisions made in each system life-cycle phase and the integration of inputs across domains dealing with the various human-related development issues at each life-cycle phase. These features have been stated by our military sponsors as critical considerations in effectively applying their programs.
Throughout the report we examine the role of HSI methods at each development phase and discuss how many of these methods provide inputs at several phases. Chapter 6 focuses on methods that are applied early in the life cycle to help identify opportunities, structure the scope, and characterize various aspects of the context of use from the perspective of human attitudes, capabilities, limitations, and needs. Chapter 7 carries some of these methods over into the design phases as well as introducing an additional set of methods. Chapter 8 focuses on evaluation methods and their role throughout each life-cycle phase. When possible, we provide examples of shared representations that can be used for communication among human-related domains as well as among those working with the human elements, the software elements, and the hardware elements.
Following this introduction, the report is divided into three parts. Part I: Human-System Integration in the Context of System Development consists of four chapters. Chapter 2 describes the system development process, Chapter 3 focuses on human-system integration in the system development process and the use of shared representations for communication, and Chapter 4 addresses HSI program risk. Chapter 5 introduces three case studies: uninhabited aerial systems, port security, and a commercial medical device. These cases were selected because they provide examples of an existing system, a developing system, and a vision for a future system. They are used throughout the report to highlight different approaches, methods, and tools.
Part II: Human-System Integration Methods in System Development contains three chapters characterizing HSI methods and tools. Each of these chapters provides an overview of the relevant methods, how they are used, the shared representations they generate, and their strengths and limitations. It is important to note that these chapters do not provide an exhaustive review but rather focus on the classes of methods that the committee