An Assessment of NASA’s National Aviation Operations Monitoring Service

5 Analysis of NAOMS Questionnaires

The charge to the committee included an assessment of the design of the questionnaires used in the NAOMS survey. The structure of the two questionnaires used—one for AC pilots and one for GA pilots, as described in Chapter 4—was the same, with different questions as appropriate. Section 5.1 reviews the questionnaire structure, and Section 5.2 provides the committee’s analysis of the questions. The complete AC and GA questionnaires can be found, respectively, in Appendixes G and H of this report.

5.1 QUESTIONNAIRE STRUCTURE

The NAOMS survey comprised four sections:

Section A—Flight Activity Levels and Background—Section A included background questions on pilot and aircraft exposure information (number of legs and hours flown). Specifically, the data collected included number of flight hours, number of flight legs, aircraft size, propulsion type, flight type (domestic, international, etc.), crew role (captain, first officer, etc.), amount of pilot experience (in years), and mission type (passenger, cargo, etc.). The primary questions were the number of flight legs and flight hours flown by the respondent (pilot) during the recall period. This information provided the “exposure” variable (legs or hours) to be used as the denominators in the rate calculations.

Section B—Safety Related Events—Section B asked the pilots about the number of events that occurred for a wide range of event types. These questions were designed to be asked routinely over a long period of time to enable the computation of safety event rates and event rate trends.
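The rate arithmetic that these exposure questions support can be sketched as follows. The respondent counts below are hypothetical and the helper function is illustrative only, not NAOMS's actual estimator:

```python
# Sketch of the exposure-denominator rate computation described above.
# All numbers are hypothetical; NAOMS's actual estimators are not
# reproduced here.

def event_rate(total_events, total_exposure, per=100_000):
    """Events per `per` units of exposure (flight hours or flight legs)."""
    return per * total_events / total_exposure

# Hypothetical batch of responses: each tuple is
# (events reported in recall period, flight hours flown in recall period).
responses = [(0, 210), (1, 180), (0, 95), (2, 240), (0, 160)]

events = sum(e for e, _ in responses)   # numerator: Section B counts
hours = sum(h for _, h in responses)    # denominator: Section A exposure

rate = event_rate(events, hours)        # events per 100,000 flight hours
```

Repeating the same computation over successive survey periods is what would yield the event-rate trends that Section B was designed to support.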
To select the topics in Section B, the NAOMS team first consulted “existing aviation safety data repositories maintained by NASA, the FAA, and the NTSB to identify known safety issues.” The team also asked pilots and others in the aviation field about “safety issues important to them based on their first-hand operating experience.”1 This collection of information occurred through consultation with the ASRS analysts, AC pilot focus-group sessions, and two workshops hosted by the NAOMS team. The focus-group sessions were held in the Washington, D.C., area in August and September of 1998. Battelle’s final report states that the sessions included 37 active air carrier pilots flying both domestic and international routes, but it does not indicate how these 37 were selected. Each session lasted 90 minutes, was led by a professional facilitator, and involved between 2 and 15 pilots. Pilots were encouraged to mention as many different types of events as possible, including any events that should not occur during normal operations. All sessions were recorded and later transcribed. The specific questions posed to the groups are in Appendix 6 of Battelle’s final report.2

The NAOMS team also conducted nine one-on-one interviews to identify additional events that did not surface in the focus groups. The consolidated list of safety-related topics, as generated by the focus-group sessions and the interviews, is in Appendix 7 of Battelle’s final report. Decisions on which topics to include in Section B of the NAOMS survey were based on “a desire to select events serious enough to be good indicators of the safety performance of the aviation system, yet not so serious that they would occur too rarely to be captured in the survey.”3 Some rare events were included in Section B because of strong industry interest in these specific topics. The NAOMS team structured the organization of questions in Section B based on the team’s research on how pilots organize their memories. The advice of “accomplished survey methodologists and aviation subject matter experts” was used “to craft questions responsive to each topic.”4

1 Battelle, NAOMS Reference Report, 2007, pp. 18-19.

Section C—Special Topics—The questions in Section C on special focus topics were intended to be asked only over a few months or years and then replaced by new topics.
Three different Section C question sets were developed for the AC questionnaire: one concerning minimum equipment lists, a second addressing in-close approach changes, and a third requested by the Joint Implementation Measurement Data Analysis Team, a Commercial Aviation Safety Team subgroup (CAST-JIMDAT), focusing on “the development of baseline aviation system performance measures.”5

Section D—Questionnaire Feedback—The questions in Section D provided respondents with a chance to give feedback on their survey experience to the interviewers who called. While not directly applicable to safety event rates or trends, some answers to these questions have provided possible topics for future surveys, should they occur.

5.2 ANALYSIS OF THE QUESTIONS

On the basis of its assessment of the AC and GA questionnaires, the committee found four types of problems that reduced the usefulness of the data collected in the NAOMS survey:

1. The questionnaires were designed so that events and experiences from markedly different segments of the aviation industry were aggregated together (and cannot be disaggregated).
2. Some of the questions asked pilots for information that they would likely not have had without a post-flight analysis.
3. Some of the questions had vague or ambiguous definitions of what constituted an event to be measured.
4. Some of the questions did not have a clear link between the measured event and aviation safety.

These problems are discussed in detail in the following subsections. (While the examples shown below come primarily from the AC questionnaire, the general problems discussed exist in both the AC and GA questionnaires, unless otherwise specified.)

2 Ibid., p. 19.
3 Ibid.
4 Ibid., p. 20.
5 Ibid.
6 Ibid.
5.2.1 Aggregation of Markedly Different Segments of Aviation

As discussed in Chapter 4, there are two issues with the sampling frame. One is whether the appropriate pilots were sampled; the other is whether the flight legs for which the pilots provided information were confined to those in the operations of interest. For example, in the AC survey, once pilots who were qualified to conduct Part 121 operations were selected, the survey should have restricted the flight legs, flight hours, and events reported to Part 121 operations and excluded flights and events that occurred in other operations. Unfortunately, this was not the case, as the survey did not ask for information about whether the pilots had actually flown in Part 121 operations, or whether they had flown in any other operations besides Part 121 during the recall period.

The AC field test survey did include a question (A3) asking the pilots how many hours and flight legs they flew in “Scheduled Major or National,” “Scheduled Regional,” “Unscheduled,” and “Cargo” operations,7 as well as the make and model of the aircraft that they flew in each of these four categories. However, the field test questions asked the pilot to report only those safety-related events that occurred in “commercial aircraft,” so the questionnaire failed to link the safety-related event to one of the four categories of service or to the make or model of aircraft that was being flown when the event occurred. The final AC questionnaire did not include any question or reference to the four types of operations contained in the field survey or to other types of operations in which the pilots might have flown. Rather, the only reference was to the hours and legs flown as a “crewmember on a commercial aircraft” and to the aircraft makes and models that the pilot “flew commercially” during the recall period.
Several problems emerge from these aspects of the questionnaire. Even the term air carrier is very broad and includes not only the well-known scheduled passenger airlines such as American, Delta, and Southwest, but also the large air cargo airlines such as FedEx and UPS. These operations were lumped together in the NAOMS survey, but the distinction between passenger and cargo airlines is potentially important. As discussed in Chapter 2, the principal concern with aviation safety has been that of reducing fatalities, so the crash of a cargo plane has less potential for loss of life than that of a passenger plane. Also, these two segments of aviation may fly similar aircraft, but they fly in different operating environments and thus may well experience different rates of some of the safety-related events included in the NAOMS survey. Yet, the survey provided no way of distinguishing which operations or which safety-related events occurred in passenger versus cargo operations.

The term air carrier, as used in Federal Aviation Regulations (FAR; 14 CFR 1.1), also includes nonscheduled air taxi operators providing service in some very small aircraft in some very challenging operating environments (such as Alaska) as well as charter-flight operators and small, nonscheduled cargo aircraft operators. Many of these operations are not conducted under Part 121 rules and procedures, but rather under other, typically less strict, regulations, often under Part 135 and in very different operating environments. Thus the term air carrier includes an extraordinarily heterogeneous collection of operators, aircraft, and operating environments (see Appendix D for more information on the relevant parts of FAR).

The survey’s use of the term commercial is even more problematic. In aviation, commercial is a broad term that could be interpreted to include an even wider variety of operations than would be included under the term air carrier.
In addition to operations and aircraft flown under Part 121, commercial operations also include operations and aircraft flown under Part 135 and even some operations, such as corporate aircraft, under Part 91.8 It is possible that the pilots may have interpreted commercial operations to include Part 136 (commercial air tours), Part 137 (agricultural operations), Part 141 (pilot schools), and perhaps other segments of aviation as well. The aircraft typically flown under Parts 135 and 91, as well as under Parts 136, 137, and 141, are smaller and often not as well equipped as those flown under Part 121. Moreover, the types of flights in these operations and the environments in which they occur are usually quite different from Part 121 air carrier operations (see Appendix D for more information on the relevant parts of FAR).

7 Joan Cwi, Director, Survey Operations, Battelle Memorial Institute, The NAOMS Field Trial, presentation to the NAOMS Working Group Meeting, Battelle Memorial Institute, Columbus, Ohio, December 18, 2003.
8 Code of Federal Regulations, Title 14: Aeronautics and Space, December 2005, available at http://ecfr.gpoaccess.gov/cgi/t/text/text-idx?c=ecfr&rgn=div8&view=text&node=14:22.214.171.124.126.96.36.199&idno=14, accessed July 20, 2009.

Based on an analysis of the available survey data, the committee determined that about 75 percent of the pilots in the AC survey reported flying only one make and model of aircraft during the recall period. As might be expected, given the sampling frame for the pilots, the most frequently reported types of aircraft were those in use by the major scheduled passenger airlines—such as B737s, B747s, B777s, B757s, and A320s. Some pilots reported flying DC-10s, MD-11s, B727s, and DC-8s, aircraft that are not used in scheduled passenger service but that are or were at the time used frequently in scheduled cargo service by FedEx and UPS. However, several pilots in the AC survey reported flying all of their time in aircraft such as B707, Gulfstream, and Learjet aircraft, yet these aircraft are not reported in the Bureau of Transportation Statistics data to have been used by air carriers operating under Part 121 during the period covered by the survey.9 For example, both Gulfstream and Learjet aircraft are used extensively as corporate aircraft flown under Part 91.

The inclusion of these aircraft in the survey appears to indicate that pilots interpreted the term commercial more broadly than the FAA definition of the term air carrier and certainly to include more than Part 121 operations. Since the pilots who flew only one of these aircraft types (B707, Gulfstream, or Learjet) during the recall period must have used this broad interpretation of commercial, it is likely that other pilots who flew more than one aircraft type during the recall period used a similarly broad definition of commercial in their responses.

There are three troubling aspects about the survey’s failure to distinguish between safety-related events occurring while flying different aircraft, under different regulatory regimes, and in different operating environments. First, the accident rates have varied considerably across these different segments of aviation, so one might expect the frequency of safety-related events also to vary across these segments.
For example, based on NTSB accident data for 1989 through 2008, the rate of fatal accidents per 100,000 flight hours was more than 33 times greater for Part 135 operations than for Part 121 scheduled airline operations. During the same period, the fatal accident rate for general aviation was more than 91 times greater than for the scheduled airlines.10 It is possible that similar differences also exist in rates of safety-related events.

Second, the growth rates of these industry segments have been different, so over time, the mix of pilots in the sample who operate in these different segments is likely to change. For example, during the 1989 through 2008 period, Part 121 scheduled airline flight hours increased 77 percent, while Part 135 flight hours decreased 25 percent and general aviation11 flight hours declined 21 percent.12 Thus the inability to link the safety-related event either to the aircraft type or to the type of operating environment would seem to severely hinder, or more likely prevent, any meaningful analysis of event rates by aircraft type or type of operation. Moreover, because the mix of operations included in the NAOMS aggregate rates is likely to change over time, trends in the NAOMS aggregate rates would not necessarily reflect trends in the occurrence of these events in the airspace system; instead, they might reflect a change in the mix of pilots flying in markedly different operating environments.

Finally, these limitations severely hinder, or more likely prevent, any meaningful comparison of event rates or trends in those rates calculated from the NAOMS data with event rates derived from other sources of data, such as those compiled by the FAA.

The same basic problem was found in the GA questionnaire. The GA questionnaire did ask pilots what proportion of their flight hours and flight legs were done under Part 121, Part 135, and Part 91 and what aircraft types they flew in each of these segments.
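The concern that trends in aggregate rates may reflect a changing mix of operations rather than changing safety can be illustrated with a small numeric sketch. The two segment rates and the hour shares below are invented for illustration, not NAOMS or NTSB figures:

```python
# Hypothetical illustration of the mix-shift problem: two segments with
# constant per-segment event rates, but the share of flight hours in the
# higher-rate segment grows, so the aggregated rate trends upward even
# though safety within each segment is unchanged.

RATE_A = 2.0   # events per 100,000 hours in segment A (illustrative)
RATE_B = 60.0  # events per 100,000 hours in segment B (illustrative)

def aggregate_rate(share_b):
    """Blended rate when segment B supplies `share_b` of total flight hours."""
    return (1 - share_b) * RATE_A + share_b * RATE_B

early = aggregate_rate(0.10)  # 10% of hours flown in segment B
late = aggregate_rate(0.25)   # 25% of hours flown in segment B
# The aggregate rate more than doubles with no change in either segment.
```

Because the NAOMS responses could not be disaggregated by segment, an upward trend like this one would be indistinguishable from a genuine deterioration in safety.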
Then, the questionnaire asked respondents to report only safety-related events for those flights that occurred under either Part 135 or Part 91. But, as in the AC survey, the GA survey failed to link the events either to the type of operation or to the type of aircraft, so the GA survey aggregated events and legs from both Part 91 and Part 135, which are very different operating environments. Thus, the same three concerns that were raised for the AC questionnaire are also relevant for the GA questionnaire.

Finding: Both the air carrier and the general aviation questionnaires asked respondents to include events, flight hours, and flight legs in segments of aviation that went beyond even the broadest definition of AC operations and beyond the conventional definition of GA. As a result, highly disparate segments of the aviation industry were aggregated into the safety-related event rates that were calculated from these surveys.

Finding: The inability to link the safety-related event to the aircraft type or to the operating environment in which the event occurred severely hinders any meaningful analysis of event rates or trends in event rates by aircraft type or by segment of aviation.

9 Bureau of Transportation Statistics, Database: Air Carrier Statistics (Form 41 Traffic)—U.S. Carriers, Research and Innovative Technology Administration (RITA), available at http://www.transtats.bts.gov/Tables.asp?DB_ID=110&DB_Name=Air%20Carrier%20Statistics%20%28Form%2041%20Traffic%29-%20%20U.S.%20Carriers&DB_Short_Name=Air%20Carriers, accessed June 11, 2009.
10 These fatality rates were from committee calculations using data found in NTSB, Aviation Accident Statistics, NTSB, Washington, D.C., 2009, Tables 6, 8, 9, and 10, available at http://www.ntsb.gov/aviation/stats.htm, accessed July 15, 2009.
11 The Department of Transportation considers general aviation to be operations of U.S.-registered civil aircraft not operated under FAR Part 121 or Part 135.
12 These growth rates were from committee calculations using data found in NTSB, Aviation Accident Statistics, 2009, Tables 6, 8, 9, and 10.

5.2.2 Asking Pilots for Information That They Would Not Have Had Without Post-Flight Analysis

Some of the questions in both the AC and GA questionnaires asked the pilots about causes of events. As will be discussed below, the pilot might well perceive that an event occurred, or that there was a specific cause for the event, based on information available in the cockpit at the time. However, in many situations, only a post-flight analysis of the flight data recorder or of the aircraft itself would reveal what the event actually was or what had caused it. In air carrier operations, pilots would not typically have access to that information. Thus, many pilots would be responding to the survey on the basis of their perception of what had occurred rather than on the basis of what a post-flight analysis would have shown to actually have occurred. This is particularly problematic if the data from these types of questions are then compared as actual events with aviation safety data from other sources.
For example, the pilot might see indications in the cockpit consistent with an engine failure and thus perceive that there had in fact been an engine failure, but the actual failure could have been something else. An accessory or component failure that reduced thrust or revolutions per minute could cause the pilot to perceive an engine failure, whereas analysis might show that the engine itself did not fail. Similarly, a Full Authority Digital Engine Control (FADEC) failure could shut down the engine, resulting in what appeared to the pilot to be an engine failure. A post-flight analysis of the flight data recorder or the aircraft would reveal what had actually happened, but few pilots would have access to this information. The pilots would therefore be answering the question with a broader definition of engine failure that would include more kinds of events than the definition used by the FAA and by other sources of data. Comparing the data as if the terms were the same would be misleading and would almost certainly result in NAOMS reporting higher rates of engine failure than the other sources would report.

Such comparisons are inevitable. Indeed, the NAOMS team’s preliminary analysis of the data and the presentation based on that analysis made this comparison13 and drew the inference, later reported by the media,14 that the FAA was under-reporting this event. If the NAOMS survey was going to use the same terms as those used by other established data sources, the committee believes that it should have used the same definitions. Otherwise, it should have recognized the difference in definitions by using different terminology.

Pilot perceptions of safety-related events can provide valuable information. Indeed, some surveys are designed to collect data on the respondents’ perceptions or opinions of events. However, NAOMS was not conceived or justified on that basis.
Rather, its stated intent was to provide information about the rates of specific events that are related to safety. The NAOMS survey was justified in the expectation that it would be a new tool that had been missing within the aviation safety field—a tool that could generate statistically valid rates of events and track trends over time for the entire NAS. NASA’s intent was to offer policy makers statistically valid estimates that would address the performance and safety of the entire NAS and would measure the impacts of various new policies and programs.15 13 “Comparison Charts,” presentation to the NAOMS Working Group Meeting, Washington, D.C., May 5, 2004, p. 2. 14 Alexandra Marks, “NASA plays down its air safety report,” Christian Science Monitor, January 3, 2008. 15 Irving C. Statler, ed., The Aviation System Monitoring and Modeling (ASMM) Project: A Documentation of Its History and Accomplishments: 1999-2005, NASA, Washington, D.C., June 2007, available at http://www.nasa.gov/pdf/225024main_TP-2007-214556%20ASMM_Project.pdf, accessed June 11, 2008, p. 17.
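As a rough sketch of what a statistically valid rate estimate of this kind entails, the example below computes a ratio estimate (total events over total exposure) with a bootstrap interval over respondents. The respondent data are hypothetical, and NAOMS's actual estimators, survey weights, and nonresponse adjustments are not reproduced here:

```python
# Minimal sketch of a ratio estimate with an uncertainty interval.
# Data are hypothetical (events reported, flight hours flown) per respondent.
import random

responses = [(0, 210), (1, 180), (0, 95), (2, 240), (0, 160),
             (1, 300), (0, 120), (0, 85), (3, 410), (0, 140)]

def ratio_rate(sample, per=100_000):
    """Events per `per` flight hours, pooled over a sample of respondents."""
    events = sum(e for e, _ in sample)
    exposure = sum(h for _, h in sample)
    return per * events / exposure

point = ratio_rate(responses)  # point estimate for this hypothetical sample

# Bootstrap over respondents (resample pilots, not individual events),
# since the pilot is the sampling unit in a survey like this.
rng = random.Random(0)
boot = sorted(
    ratio_rate([rng.choice(responses) for _ in responses])
    for _ in range(2000)
)
lo, hi = boot[50], boot[1949]  # rough 95% interval
```

A design like this only yields meaningful NAS-wide estimates if the events and exposure in each response can be attributed to a well-defined population of operations, which is precisely what the questionnaire problems discussed in this chapter undermine.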
Following are examples of NAOMS survey questions with the problem discussed above. Question ER5 in the AC questionnaire reads:

How many times during the last (TIME PERIOD) did an inflight aircraft on which you were a crewmember experience smoke, fire, or fumes that originated in any of the following areas:
- the engine or nacelle?
- the flight deck?
- the cargo hold?
- the galley?
- elsewhere in the passenger compartment?
During the last (TIME PERIOD), how many times did an inflight aircraft on which you were a crewmember experience smoke, fire or fumes that originated other than in the engine or nacelle, flight deck, cargo hold, galley, or passenger compartment? Where did the smoke, fire or fumes originate? SPECIFY.

This question cannot be answered accurately without analyzing post-flight data because pilots may not know, or be able to tell, the difference between bleed air fumes (oil-based), electrical fumes, or fumes from solid objects. For example, smoke or fumes detected by the pilot anywhere in the aircraft could have originated in the engine and spread throughout the aircraft as a function of pressurization bleed air extraction. In some situations, it is also possible that the perceived smoke or fumes could have come from outside the aircraft, particularly when the aircraft is on a taxiway adjacent to an active runway awaiting its turn to take off.

Other questions have similar potential problems. One question asks about uncommanded movements of control surfaces, but the pilot would not necessarily know what failure resulted in what appeared to be an uncommanded movement. Another question asks for how many degrees an aircraft rolled in a wake turbulence encounter, but without a post-flight analysis of the flight data recorder, a pilot would not know how much the aircraft had rolled. Another question asks for airspeed deviation during a wind shear event.
During such sudden and unexpected encounters, pilots are typically more concerned with recovering the aircraft than with estimating the degree of roll or airspeed deviation. Other questions ask whether the aircraft came within 500 feet of another aircraft. Again, in such unexpected situations, the pilot is typically neither in a position nor trained to make an accurate estimate of the absolute distance. Still another question asks whether hazardous materials were packaged and loaded on the aircraft in compliance with the appropriate regulations, but those are not regulations that a pilot is required or even expected to be familiar with. Appendix F contains nine additional examples of questions that potentially ask pilots for information that they would not typically have access to.

Finding: Some of the questions in the NAOMS survey would not provide accurate and consistent measures of events because they asked about situations in which pilots would not typically have access to the information needed for an accurate response.

5.2.3 Problems with Structure and Wording of the Questions

The literature on the design of survey questionnaires emphasizes the importance of clear and carefully worded questions to elicit reliable responses. In particular, it is important that respondents answer a question consistently even if the question is asked at different points in the survey and that, ideally, different respondents interpret the same question in the same way. This is especially important in a survey such as NAOMS that is intended to collect information on events that occurred, rather than to collect opinions and perceptions of respondents. The NAOMS team field-tested the survey at several stages and apparently redesigned the questions to take into account some of the comments that it received. Nevertheless, the committee finds that several questions in the survey contain wording that pilots may have found difficult to interpret precisely and to answer consistently.
These include (1) long questions with complex structure that would be difficult to understand in a computer-assisted telephone interview; (2) questions that appear to combine multiple, unrelated events; (3) questions about events
that are not well defined; and (4) questions containing vague terms. As is discussed below, these problems may have resulted in inconsistent responses and led to additional measurement error in some of the responses.

5.2.3.1 Complex Structure

The following is an example from the NAOMS survey of a long question with complex structure:

AC2. During the last (TIME PERIOD), how many times did an aircraft on which you were a crewmember perform an evasive action to avoid an imminent in-flight collision with another aircraft that was never closer than 500 feet including evasive action in response to a TCAS advisory?

This question includes several conditions—time period, evasive action, imminent collision, 500 feet—that the respondent must keep in mind while deciding on an answer. Doing so is particularly difficult in a telephone interview. It is not easy to digest such questions even if the interviewer repeats the question, which would only be done at the respondent’s request. The literature on questionnaire design clearly recommends against using such questions.16

5.2.3.2 Combining Multiple Events or Causes

Several of the NAOMS survey questions had two or more subparts, and it would be unclear to the respondent which part to answer. This is sometimes referred to in the literature as a double-barreled question.17 Two examples are given below.

AT2. How many times during the last (TIME PERIOD) did an aircraft on which you were a crewmember fly at an undesirably high altitude or airspeed on approach due to an A.T.C. [Air Traffic Control] clearance?

This question combines two events: undesirably high altitude and undesirably high airspeed. It is unclear why one should be interested in the total number of the two events, as their causes and consequences are likely to be different.
Such questions also create a problem for data analysis, as an answer of X in this example can mean X times high altitude, X times high airspeed, or some combination of those possibilities.

AH2. During the last (TIME PERIOD), how many times did an aircraft on which you were a crewmember accept an A.T.C. clearance that the aircraft could not comply with because of its performance limits?

This is another example of a multipart question. The answer can refer to two very different situations from an aviation safety perspective. In the first case, a pilot may have accepted the clearance and subsequently determined the inability to comply. In the second, the pilot may have accepted the clearance knowing in advance that the aircraft could not, or reasonably might not, be able to comply with it. From the standpoint of trying to reduce such potentially unsafe events, it is critical to distinguish between these two situations.

5.2.3.3 Unclear Definition of Events

Some questions in the NAOMS survey asked the respondents about events that were not clearly defined. For example:

AH9. During the last (TIME PERIOD), how many times did an aircraft on which you were a crewmember experience a hard landing?

There was no definition of what constitutes a hard landing, and there was no consensus on the meaning of the term even among the pilots within the committee. Several questions included the phrase near collision or nearly collide. The term near collision is difficult to quantify, and there is likely to be considerable variation among the respondents in interpreting it. It may have been better to ask for the number of times that a near collision had led to some specific action on the part of the pilot, such as evasive action or the reporting of the event.

16 Bradburn et al., Asking Questions, 2004.
17 Ibid.

5.2.3.4 Use of Vague Terms

There are several questions in the NAOMS survey with vague modifiers, such as abrupt, accurate, severe, and time-critical. Some examples are given below.

TU1. During the last (TIME PERIOD), how many times did an aircraft on which you were a crewmember encounter severe turbulence that caused large abrupt changes in altitude, airspeed, or attitude?

Different pilots would interpret the phrases severe turbulence and large abrupt changes in altitude, airspeed, or attitude differently. Severe turbulence has a formal definition in the FAA’s Aeronautical Information Manual: “Turbulence that causes large, abrupt changes in altitude and/or attitude. It usually causes large variations in indicated airspeed. Aircraft may be momentarily out of control.”18 However, this definition also uses vague modifiers, which would prevent consistency in the answers from the respondents. Moreover, severe is the third of four levels of turbulence defined in the manual—light, moderate, severe, and extreme—all of which use vague modifiers in their definitions. Survey respondents familiar with these four terms and definitions might well have found it ambiguous as to whether events that they perceived to fit the definition of extreme turbulence were to be included in their response to the preceding question.

WE1. During the last (TIME PERIOD), how many times did an aircraft on which you were a crewmember lack accurate weather information when crewmembers needed it while airborne?

The perception of the extent to which the weather information was accurate, or sufficiently accurate for their needs, is likely to vary among respondents.
The problematic types of questions exemplified above must be carefully examined using cognitive testing techniques,19 and the committee did not see any evidence that this had been done. It is possible that some of these questions could not be worded more precisely, in which case they should not have been included in the survey. Appendix F provides additional examples of questions identified by the committee as having one or more of the deficiencies discussed in this section.

Finding: There are several problems with the structure and the wording of the survey questions. These problems may have led to varying interpretations and judgments, thus reducing the value of some of the survey results.

5.2.4 Questions About Events Without a Clear Link to Aviation Safety

Finally, in reviewing the questionnaire, the committee was concerned about the lack of relevance to aviation safety of the series of questions about in-close approach changes in Section C of the AC questionnaire. The question on which Section C is based is as follows:

IC1. During the last (TIME PERIOD), how many times did an aircraft on which you were a crewmember receive an unrequested clearance change to runway assignment, altitude restrictions, or airspeed within 10 miles of the runway threshold?

The committee questioned the relevance of this section to aviation safety because of the 10-mile criterion that was established as the basis for the question. A 10-mile criterion for defining something as “safety related” seems ad hoc and inconsistent with other definitions of safety around the terminal area. By including these questions as part of “safety-related events,” some respondents might deduce that there is something inherently unsafe about an approach change inside of 10 miles if the crew did not request it. The response to this question, and whether or not there was a potential safety concern, could vary greatly depending on where the change was initiated and on how much of a heading change would be required in the maneuver. Similarly, the question did not allow for the consideration of the type of aircraft involved, since smaller, narrow-body aircraft could more easily accept a change much closer to the runway than could a wide-body aircraft.

18 Federal Aviation Administration, Aeronautical Information Manual, Washington, D.C., February 14, 2008, Section 7.1.23 PIREPS Relating to Turbulence, Section 7.1.25 Clear Air Turbulence (CAT) PIREPS, and Table 7-1-9 Turbulence Reporting Criteria Table.
19 Robert M. Groves, Floyd J. Fowler, Jr., Mick P. Couper, James M. Lepkowski, Eleanor Singer, and Roger Tourangeau, Survey Methodology, Wiley, New York, 2004, p. 213.