
2—
Survey Methodology

To survey the human factors community, two data-gathering techniques were used: a computer-assisted telephone interview (Appendix A) and a mail-in questionnaire (Appendix B). The computer-assisted telephone interview (CATI) was used to survey human factors specialists and the supervisors of human factors personnel. The mail-in questionnaire was employed to survey the directors of graduate programs offering specialized education in human factors. The methods used in the two surveys are described below.

THE COMPUTER-ASSISTED TELEPHONE INTERVIEW SURVEY

The purpose of this survey was to question human factors specialists and supervisors about their professional and job-related activities and education. The method of choice for obtaining this information was the computer-assisted telephone interview. During the last decade, CATI systems have become a standard method for conducting interviews because of the flexibility that they offer in comparison with self-administered questionnaires. In a CATI interview, neither the respondent nor the interviewer uses pencil and paper to record responses to questions. Instead, the interviewer contacts members of a preselected sample by telephone at a time previously agreed on. A branching interview protocol on the interviewer's computer screen prompts the interviewer to ask questions. Respondent answers are entered by the interviewer on a keyboard as either coded or free-text information.

The principal advantage of the CATI survey method is that it permits a questionnaire to contain branching questions that can be asked or not depending on responses to previous questions. With this if-then branching structure, a line of questioning is continued by an interviewer if a respondent's replies meet certain criteria and stopped or switched to another line if responses meet other criteria. This is very difficult to achieve with a self-administered questionnaire, even when the respondents are highly motivated. Well-trained interviewers can follow complex questionnaires, but under the pressures of an interview even a highly skilled interviewer can make many errors, either omitting questions that should be asked or asking questions that should not be asked. The CATI method eliminates these sources of error and allows the interviewer to concentrate on communicating with the respondent. A CATI interviewer can also define terms and clarify questions for a respondent.
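To make the if-then branching concrete, the toy sketch below (in Python, purely for illustration; the actual CATI software and full instrument are described only in Appendix A) shows how the two screening questions quoted later in this chapter can drive such a branch: ineligible respondents exit immediately, and a supervisor-specific follow-up is asked only when warranted.

```python
# Illustrative sketch only: a toy if-then branching protocol in the spirit of a
# CATI interview. The branching rules and the follow-up question are hypothetical;
# only the two screening questions are taken from this chapter.

def ask(prompt, choices=None):
    """Display a question for the interviewer to read and key in the response."""
    answer = input(f"{prompt} ").strip().lower()
    while choices and answer not in choices:
        answer = input(f"Please enter one of {sorted(choices)}: ").strip().lower()
    return answer

def screening_interview():
    record = {}
    record["does_hf_work"] = ask(
        "In your current position, are you primarily concerned with human factors? (yes/no)",
        {"yes", "no"})
    record["supervises_hf"] = ask(
        "Do you supervise any people who perform human factors activities? (yes/no)",
        {"yes", "no"})

    # Branch 1: respondents who answer no to both questions are ineligible,
    # and the interview ends here.
    if record["does_hf_work"] == "no" and record["supervises_hf"] == "no":
        record["eligible"] = False
        return record

    record["eligible"] = True
    # Branch 2: supervisor-specific questions are asked only of supervisors.
    if record["supervises_hf"] == "yes":
        record["n_supervised"] = ask("How many human factors specialists do you supervise?")
    return record

if __name__ == "__main__":
    print(screening_interview())
```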

Questionnaire Development

The questions used in this survey drew on four sources of information: (a) questions developed by Sanders and his associates (Sanders, Bied, and Curran, 1986) in job-descriptive surveys of members of the Human Factors Society (HFS), (b) studies of the activities of human factors specialists done by the American Psychological Association for the Army Research Institute, (c) unpublished task analyses of the work of human factors specialists completed by the Human Factors Society (internal communication, 1986), and (d) questions suggested by a resource group from government, industry, and academia solicited by the panel. This resource group was selected to represent the different types of employment settings and work in which human factors specialists are engaged (see the acknowledgments for their names).

Using these sources of information, three working subgroups of the panel were instructed to develop specifications for separate sections of the CATI questionnaire. These specifications were then discussed by the full panel, formatted, and pretested on a small group consisting of potential respondents and interviewers at the Survey Research Laboratory who were later to conduct the CATI interviews. This process helped to pinpoint ambiguous and misleading questions. The questions were then revised on the basis of these respondent and interviewer comments. In all, the questionnaire went through four revisions before a final draft was reached.

Sampling

The aim of the CATI survey was to obtain a sample of human factors specialists and supervisors who could be asked questions about their work and education. Because the panel judged that it would need analyses broken down by employer type, respondent age, respondent sex, and other categories, a sample of at least 1,000 respondents representative of the population of human factors specialists and supervisors was necessary. Although several different designs could be used to obtain a sample of this size, the only feasible and economically realistic alternative was to draw a sample from an enumeration of known human factors specialists and supervisors. Unfortunately, no such comprehensive list has ever been compiled. Therefore, the survey contractor, the University of Illinois Survey Research Laboratory, constructed a master list using three sources: (1) the 1988 membership list of the Human Factors Society, (2) the most recent membership lists of other professional associations in which some members were believed to be engaged in human factors activities, and (3) nominations of persons obtained from interviews with sample respondents drawn from the association lists.

A major limitation of using such existing lists is that they include ineligible persons, such as those who have retired or have moved to jobs outside human factors. While some ineligibles were expected even on the Human Factors Society list, this problem was greater for the other professional associations and for the network sampling method.

In addition to the Human Factors Society, 14 associations were identified that the panel believed would contain some members engaged in human factors activities or in their supervision. These associations were invited to participate in the survey by providing the survey contractor with membership lists that could be sorted to identify members interested or engaged in human factors. Of the 14, 10 societies agreed to cooperate:

the American Nuclear Society,

the American Industrial Hygiene Association,

the Industrial Designers Society of America,

the Aerospace Medical Association,

the American Institute of Industrial Engineers,

the National Security Industrial Organization,

the American Society of Agricultural Engineers,

the Association for Computing Machinery,

the Acoustical Society, and

the Institute of Electrical and Electronics Engineers: Systems, Man, and Cybernetics Division.

In the opinion of the panel, most human factors specialists or specialist supervisors who are not members of the Human Factors Society are likely to belong to one or more of these 10 organizations.

Of the four societies that did not participate, three did not have information that identified members who had human factors interests or were human factors specialists or supervisors of human factors personnel. The four societies were:


the American Society of Safety Engineers,

the Environmental Design Research Association,

the Society for Information Display, and

the System Safety Society.

Even if the members of these societies had been identifiable as human factors specialists or supervisors, it is likely that their number would have been so small as to have no appreciable effect on the results. It is also likely that at least some of this small number would also have been members of the Human Factors Society and thus available for sampling from its membership list.

The membership list of the Human Factors Society yielded 3,907 names, and the lists of the other 10 societies yielded a total of 12,552 names of sampling candidates for CATI interviews (Table 2.1). From these two pools, a sample of 1,027 was initially selected from the Human Factors Society list and another sample of 1,034 was drawn from the remaining 10 lists. The two samples were then checked for duplicates, and people on the Human Factors Society list were excluded from the remaining lists. Those who appeared on more than one of the other lists were subsampled at a rate equal to the inverse of the number of lists on which they appeared; this gave equal probabilities of selection to all sampled persons on the combined lists of the other professional societies.
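As a hedged illustration of that subsampling rule (the selection routine and names below are hypothetical, not the contractor's actual procedure), retaining a drawn person with probability equal to the inverse of the number of lists on which he or she appears offsets the fact that such a person is proportionally more likely to be drawn from the combined frame:

```python
# Hypothetical sketch of inverse-of-list-count subsampling. A person appearing
# on k of the other societies' lists is roughly k times as likely to be drawn
# from the combined frame, so retaining that person with probability 1/k
# restores equal selection probabilities.
import random

def subsample_duplicates(drawn, rng=None):
    """drawn: iterable of (person_id, n_lists) pairs already selected from the
    combined lists (Human Factors Society members removed beforehand)."""
    rng = rng or random.Random(0)
    return [pid for pid, n_lists in drawn if rng.random() < 1.0 / n_lists]

# Example: person "B" appears on two lists and so is retained only half the time.
print(subsample_duplicates([("A", 1), ("B", 2), ("C", 3)]))
```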

Two approaches to determining eligibility were considered. One was simply to ask respondents whether they considered themselves to be human factors specialists, leaving unspecified the meaning of that label.

TABLE 2.1 Characteristics of the Sampling Candidates for CATI Interviews

|                     | Human Factors Society Members | Human Factors Specialists from 10 Other Sources | Network Nominees: Peers | Network Nominees: Supervisors | Total |
|---------------------|-------------------------------|-------------------------------------------------|-------------------------|-------------------------------|-------|
| Sample              | 1,027                         | 1,034                                           | 612                     | 383                           | 3,056 |
| Ineligible          | 302                           | 477                                             | 273                     | 178                           | 1,230 |
| Eligibility unknown | 73                            | 354                                             | 223                     | 98                            | 748   |
| Eligible            | 652 (100.0%)                  | 203 (100.0%)                                    | 116 (100.0%)            | 107 (100.0%)                  | 1,078 |
| Interviews          | 614 (94.2%)                   | 170 (83.7%)                                     | 103 (88.8%)             | 84 (78.5%)                    | 971 (90.0%) |
| Refusals            | 38 (5.8%)                     | 33 (16.3%)                                      | 13 (11.2%)              | 23 (21.5%)                    | 107 (10.0%) |


The major drawback to such an approach is that persons who were actually doing human factors work but who did not regard themselves as human factors specialists (for example, engineering psychologists) would eliminate themselves from the sample. Because an important focus of the project was to determine whether human factors work was being done by nonspecialists, the self-identification method was considered inappropriate. Sample eligibility was therefore based solely on actual occupational tasks currently performed, with self-identification with a profession to be determined subsequent to selection for the sample.

Given these considerations, persons in the initial samples were contacted by telephone by trained interviewers from the Survey Research Laboratory and asked two screening questions:

  1. In your current position, are you primarily concerned with human factors—that is, human capabilities and limitations related to the design of operations, systems, or devices?

  2. Do you supervise any people who perform human factors activities?

People who answered no to both questions were classified as ineligible and were not interviewed. This screening procedure eliminated all those on the membership lists who might regard themselves as human factors specialists but actually did not do any human factors work in their jobs. Also excluded were academic professionals who teach human factors principles to students but who do not perform any other work in the field, such as consulting. This was considered appropriate because the educators' activities were covered by the university program survey. More important, the screening criteria also eliminated large numbers of people who did not do human factors work and probably did not think of themselves as human factors specialists.

As Table 2.1 shows, of the 1,027 Human Factors Society members sampled, 302 were ineligible because they did not meet the screening criteria, and the eligibility of 73 was unknown. This left 652 members eligible for interviewing; viewed with respect to the members whose eligibility was known, 68 percent would have qualified for an interview. Of the 1,034 human factors specialists from the 10 other societies, 477 did not meet the screening criteria and were therefore ineligible, and the eligibility of 354 could not be determined. This left 203 persons eligible for interviewing: 30 percent of the human factors specialists from other societies whose eligibility was known. The lower percentage of eligible people among the members of the other societies had been anticipated and explains why a heavier sampling rate from the membership list of the Human Factors Society was used initially. It should be noted that, in this report, estimates are weighted to account for this differential sampling rate to eliminate bias.
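For readers who want to trace the figures, the short computation below reproduces the two eligibility percentages from the Table 2.1 counts; it adds nothing beyond the numbers already reported.

```python
# Eligibility rates among sampled persons whose eligibility could be determined
# (counts from Table 2.1).
hfs_eligible, hfs_ineligible = 652, 302
other_eligible, other_ineligible = 203, 477

hfs_rate = hfs_eligible / (hfs_eligible + hfs_ineligible)
other_rate = other_eligible / (other_eligible + other_ineligible)

print(f"Human Factors Society: {hfs_rate:.1%}")    # ~68.3 percent
print(f"Other societies:       {other_rate:.1%}")  # ~29.9 percent
```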


In addition to the eligible respondents obtained from society membership lists, 116 more were obtained from network nominations, by society members, of peers who were not members of any society, and another 107 supervisors of human factors personnel were also identified as eligible for interviewing. (This network nomination process is explained in greater detail later in this chapter.)

The number of individuals who refused to be interviewed once contacted was low, averaging 10 percent across the society member, peer, and supervisor groups. The refusal rate of 5.8 percent for members of the Human Factors Society was lower than that found in most surveys, suggesting a strong degree of interest by members in the survey as it was explained to them (see Table 2.1).

Making contact with potential respondents was a major problem faced in conducting the CATI survey. Many individuals had to be called more than 10 times before they could be located and screened and, as the table shows ("Eligibility unknown"), some could never be located or screened at all.

One factor that contributed to the problem of locating potential respondents was the vintage of the society membership lists. While the Human Factors Society list was current, some of the other lists used were several years old, and some sampled respondents had moved and could not be located. For purposes of estimating the universe size and overall cooperation, we assume that the eligibility rate for those who could not be located is the same as for those who were located. Characteristics of the sample, including people who were not located, are shown in Tables 2.2, 2.3, and 2.4. Because of budget limitations, fewer efforts were made to locate those on the lists of organizations other than the Human Factors Society, since it had already been established that only a minority would be eligible.
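The sketch below shows how that assumption yields the estimates in Table 2.3: the eligibility rate observed among located persons in each group (Table 2.2) is simply applied to the counts of those whose eligibility is unknown.

```python
# Estimated eligibility among candidates who could not be located, applying the
# located-sample rates from Table 2.2 to the "eligibility unknown" counts.
not_located = {"Human Factors Society": 73,
               "Other associations": 354,
               "Network nominees": 321}
located_rate = {"Human Factors Society": 652 / 954,
                "Other associations": 203 / 680,
                "Network nominees": 223 / 674}

for group, n in not_located.items():
    eligible = round(n * located_rate[group])
    print(f"{group}: ~{eligible} eligible, ~{n - eligible} ineligible")
# Reproduces Table 2.3: roughly 50/23, 106/248, and 106/215.
```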

The panel wanted to include people who were not members of any professional societies in the CATI survey. As was mentioned earlier, this was accomplished by asking persons from the list sample to name their supervisor and other human factors specialists with whom they interact.

TABLE 2.2 Eligibility Rates of the Sample

|                    | Human Factors Society | Other Associations | Network Nominees |
|--------------------|-----------------------|--------------------|------------------|
| Initial sample     | 1,027                 | 1,034              | 995              |
| Located eligible   | 652 (68.3%)           | 203 (29.9%)        | 223 (33.1%)      |
| Located ineligible | 302 (31.7%)           | 477 (70.1%)        | 451 (66.9%)      |
| Total              | 954 (100.0%)          | 680 (100.0%)       | 674 (100.0%)     |


TABLE 2.3 Estimated Eligibility Rates for Interview Candidates Who Could Not Be Located

|                      | Human Factors Society | Other Associations | Network Nominees |
|----------------------|-----------------------|--------------------|------------------|
| Estimated eligible   | 50 (68.3%)            | 106 (29.9%)        | 106 (33.1%)      |
| Estimated ineligible | 23 (31.7%)            | 248 (70.1%)        | 215 (66.9%)      |
| Total                | 73                    | 354                | 321              |

This technique is sometimes called network sampling. As Table 2.1 shows, a total of nearly 1,000 nominations was obtained: 612 specialists and 383 supervisors. These nominations were first checked against the society membership lists; those not on the lists were then screened for eligibility. As with the list samples, not all were eligible or could be located.

Network sampling requires that nominated persons be weighted by the inverse of the network size of the nominator; this was done so that these cases could be added to the list-sample cases for analytic purposes. For people selected by network nomination, a more complex weight was therefore used, one that takes account of the respondents' network sizes. The probability that a person is nominated depends on the number of other specialists he or she knows: someone who is not known by anyone else (an isolate) will never be nominated, while someone who is known to many people is more likely to be nominated by one or more of them.

TABLE 2.4 Estimated Cooperation Rates Including Those Who Could Not Be Located, Based on Total Sample Data

|                          | Human Factors Society | Other Associations | Network Nominees |
|--------------------------|-----------------------|--------------------|------------------|
| Completed                | 614 (87.5%)           | 170 (55.0%)        | 187 (56.9%)      |
| Refused                  | 38 (5.4%)             | 33 (10.7%)         | 36 (10.9%)       |
| Not located              | 50 (7.1%)             | 106 (34.3%)        | 106 (32.2%)      |
| Total estimated eligible | 702 (100.0%)          | 309 (100.0%)       | 329 (100.0%)     |


For supervisors, the probability of nomination depends on how many human factors specialists he or she supervises. Thus, for an estimate to be unbiased, the weight assigned to each nominee was the nominator's weight divided by the nominator's network size.
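A minimal sketch of that weighting rule follows; the function name and the example weights are hypothetical, since the report does not give the base list-sample weights.

```python
# Network-nomination weighting: a nominee inherits the nominator's sampling
# weight divided by the nominator's network size (the number of specialists
# named or, for supervisor nominations, the number of specialists supervised).
def nominee_weight(nominator_weight, network_size):
    if network_size <= 0:
        raise ValueError("an isolate cannot be reached by network sampling")
    return nominator_weight / network_size

# Example: a list-sample respondent with weight 1.0 who names four peers
# contributes a weight of 0.25 to each nominee.
print(nominee_weight(1.0, 4))  # 0.25
```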

Interviewing

Except for the problem of locating respondents, computer-assisted telephone interviewing was accomplished with minimal difficulty. The Survey Research Laboratory at the University of Illinois used a group of 20 experienced telephone interviewers and five supervisors. Interviewers were briefed on the purposes of the survey and the meaning of such terms as human factors, and they spent at least one day conducting practice interviews before beginning actual cases. For reference use, each interviewer was provided with detailed printed instructions for each question.

During the interviewing, a supervisor was on duty at all times to answer questions. The supervisor also monitored interviewer performance on a random basis. During most of the interviewing, there was one supervisor monitoring three to eight interviewers, the average being around five. Interviewers reported that respondents were very cooperative and had little difficulty in responding to the questions presented to them.

Advance announcements in the Bulletin of the Human Factors Society and letters to the other cooperating societies were prepared to explain the purposes of the survey. Before the interview, an initial letter from the National Research Council was sent to each person in the sample along with the list of human factors job activities and a list of topics covered in specialized human factors training; these materials helped the actual interviews proceed smoothly. Virtually all respondents had examined the materials and had them available at the time scheduled in advance for the interview.

The CATI survey was scheduled for the period of April-June 1989, prior to the summer vacation season. The interviewing actually stretched out an additional month, as the Survey Research Laboratory made final efforts to locate respondents who were away from their offices on long-term assignments or vacation.

THE MAIL-IN QUESTIONNAIRE

The purpose of the mail-in questionnaire was to obtain information about university graduate programs in engineering, psychology, and other departments that offer specialized education in human factors. Questionnaires were mailed to the directors of all programs in the United States and Canada that were listed in the 1988 edition of the Directory of Human Factors Graduate Programs in the United States and Canada published by the Human Factors Society, the largest professional association of human factors specialists in North America.


Questionnaire Development

The questions used in the mail-in survey were developed by the panel and took as points of departure: (1) questions that had been used by Sanders in earlier surveys of the membership of the Human Factors Society (Sanders, Bied, and Curran, 1986); (2) information presented in the Directory of Human Factors Graduate Programs; and (3) additional items that the panel judged to be relevant. The final mail-in questionnaire is shown in Appendix B.

Sampling

The universe for this survey was the 65 programs described in the program directory. All program directors were contacted by mail, with follow-up by mail and telephone by staff of the Survey Research Laboratory of the University of Illinois. Additional follow-up calls were made by panel members to those programs that had not responded by the stated deadline. Survey data collection began in spring 1989 and continued until late fall.

Cooperation Rate

In North America, 58 universities offer 65 graduate education programs with a specialization in human factors; some universities offer programs in more than one department. Of the 65 programs, 59 are in the United States and 6 are in Canada. Of the U.S. programs, 48 responded, a cooperation rate of just over 81 percent. The failure of all but one Canadian program to respond lowered the combined cooperation rate for the United States and Canada to 75.4 percent. These cooperation rates were somewhat lower than had been expected and may be attributed to the complexity and length of the questionnaire and the amount of detail that was requested.
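The computation below simply reproduces the two cooperation rates from the counts given above.

```python
# Cooperation rates for the mail-in program survey.
us_programs, us_responded = 59, 48
canadian_programs, canadian_responded = 6, 1

us_rate = us_responded / us_programs
combined_rate = (us_responded + canadian_responded) / (us_programs + canadian_programs)

print(f"U.S. programs:            {us_rate:.1%}")        # just over 81 percent
print(f"U.S. and Canada combined: {combined_rate:.1%}")  # ~75.4 percent
```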

There is no reason to believe that sample bias had an impact on the overall findings of the program survey. For example, the response rate from small programs was not appreciably different from that of larger programs.

Quality of Data

The data received from program directors were generally of high quality. Unfortunately, some questionnaire items were not completed. The most serious problem of missing data was that some respondents from institutions with both master's and doctoral programs reported on one but not both programs. For further details on the actual sample sizes for each question in the survey, see Chapter 4.
