Appendix F: Validation of Aircraft Flight Simulators

OVERVIEW

Computer-based simulations are used widely in commercial aviation to assist in airframe design, flight operations, and pilot training (Stix, 1991). Development of aircraft flight simulators is tied directly to the development of specific aircraft: the extensive data generated during aircraft design and testing serve as the technical resource for developing a simulator for training pilots in that aircraft's operation, so aircraft flight simulators are airframe-specific. Thus, they cannot be modified to permit training in multiple airframes, nor are they used for designing air routes.

Validation of an aircraft flight training simulator's fidelity in representing an aircraft's performance and handling has historically been the task of the chief pilot for an airframe manufacturer, a pilot selected by the Federal Aviation Administration (FAA), or a military officer assigned the role of project pilot for the Department of Defense. Evaluations have been based on the pilot's subjective opinion of how well the cockpit controllers (such as stick, throttles, rudder pedals, and brakes), cockpit instrumentation, aural system (used to generate engine sounds and wind noise), visual system, and motion system are designed, modeled, and integrated to recreate the true behavior of the aircraft across the various flight mission phases (for example, ground handling, takeoff, and climb). Today, pilots still play a role in the validation process, but many quantitative tests have also been developed for evaluating the correctness of a simulator.



Shiphandling Simulation: Application to Waterway Design
The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

VALIDATION POLICY

Strict guidelines are followed for the design and validation of military operational flight and weapon system trainers. The military software specification MIL-2167A defines the procedure by which simulator software is designed, documented, and validated. Simulators for aircraft under the jurisdiction of the FAA are validated under criteria specified by an FAA Advisory Circular; currently, this is AC 120-45A Draft: Airplane Flight Training Device Qualification. The FAA's interest in certificating an aircraft flight simulator can be traced to its philosophy concerning the role of flight simulators. This viewpoint is stated in the introduction to the circular, which follows:

The primary objective of flight training is to provide a means for flight crewmembers to acquire the skills and knowledge necessary to perform to a desired safe standard. Flight simulation provides an effective, viable environment for the instruction, demonstration, and practice of the maneuvers and procedures (called training events) pertinent to a particular airplane and crewmember position. Successful completion of flight training is validated by appropriate testing, called checking events.

The complexity, operating costs, and operating environment of modern airplanes, together with the technological advances made in flight simulation, have encouraged the expanded use of training devices and simulators in the training and checking of flight crewmembers. These devices provide more in-depth training than can be accomplished in the airplane and provide a very high transfer of skills, knowledge, and behavior to the cockpit. Additionally, their use results in safer flight training and cost reductions for the operators, while achieving fuel conservation, decreasing noise, and otherwise helping maintain environmental quality.
The FAA has traditionally recognized the value of training devices and has awarded credit for their use in the completion of specific training and checking events in both general aviation and air carrier flight training programs and in pilot certification activities. Such credits are delineated in FAR Part 61 and Appendix A of that part; FAR Part 121, including Appendices E and F; and in other appropriate sources such as handbooks and guidance documents. These FAR sources, however, refer only to a "training device," with no further descriptive information. Other sources refer to training devices in several categories, such as Cockpit Procedures Trainers (CPT), Cockpit Systems Simulators (CSS), Fixed Base Simulators (FBS), and other descriptors. These categories and names have had no standard definition or design criteria within the industry and, consequently, have presented communications difficulties and inconsistent standardization in their application. Furthermore, no single source guidance document has existed to categorize these devices, to provide qualification standards for each category, or to relate one category to another in terms of capability or technical complexity. As a result, approval of these devices for use in training programs has not always been equitable.

The circular, under Evaluation Policy, addresses the scope of quantification testing that is required in order to certificate (validate) the operation of a simulator, as follows:

The flight training device must be assessed in those areas which are essential to accomplishing responses and control checks, as well as performance in the takeoff, climb, cruise, descent, approach, and landing phases of flight. Crewmember station checks, instructor station functions checks, and certain additional requirements depending on the complexity of the device (i.e., touch-activated cathode ray tube instructor controls; automatic lesson plan operation; selected mode of operation for "fly-by-wire" airplanes; etc.) must be thoroughly assessed. Should a motion system or visual system be contemplated for installation on any level of flight training device, the operator or the manufacturer should contact the NSPM [National Simulator Program Manager] for information regarding an acceptable method for measuring motion and/or visual system operation and application tolerances. The motion and visual systems, if installed, will be evaluated to ensure their proper operation.

The intent is to evaluate flight training devices as objectively as possible. Pilot acceptance, however, is also an important consideration. Therefore, the device will be subjected to the validation tests listed in Appendix 2 of this Advisory Circular and the functions and subjective tests from Appendix 3. These include a qualitative assessment by an FAA pilot who is qualified in the respective airplane, or set of airplanes in the case of Level 2 or 3. Validation tests are used to compare objectively flight training device data and airplane data (or other approved reference data) to assure that they agree within a specified tolerance.
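A validation test of this kind can be sketched as a simple tolerance check between a simulator time history and the corresponding flight-test reference. The signal values and the tolerance below are illustrative assumptions, not figures from the Advisory Circular:

```python
def within_tolerance(sim_data, flight_data, tolerance):
    """Return True if every simulator sample agrees with the flight-test
    reference sample at the same time step to within the given tolerance."""
    if len(sim_data) != len(flight_data):
        raise ValueError("time histories must be sampled identically")
    return all(abs(s - f) <= tolerance for s, f in zip(sim_data, flight_data))

# Hypothetical pitch-attitude time histories (degrees) at matching time steps,
# checked against an assumed +/-1.5 degree tolerance band.
flight_test = [0.0, 2.1, 4.0, 5.8, 7.2, 8.1]
simulator   = [0.1, 2.3, 3.8, 5.5, 7.0, 8.4]

print(within_tolerance(simulator, flight_test, tolerance=1.5))  # True
```

In practice the tolerance and the parameters to be compared are taken from the circular's validation-test tables; the point here is only the pass/fail comparison of matched time histories.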
Functions tests provide a basis for evaluating flight training device capability to perform over a typical training period and to verify correct operation of the controls, instruments, and systems.

QUANTITATIVE TEST PROCEDURES

Systematic procedures have been, and continue to be, developed to aid the validation of all components that comprise a modern aircraft simulator. Parameter identification is now used routinely to extract aerodynamic models from flight test measurements, and the results are used to validate the simulation mathematical model (Anderson et al., 1983; Anderson and Vincent, 1983; Hess and Hildreth, 1990; Trankle et al., 1981; Trankle et al., n.d.).

Validation of an aircraft flight simulation model typically involves four levels of testing (actual procedures vary by facility). At the first level, individual modules or subprograms (down to the smallest practical subdivision) are tested individually. This ensures that each module has been coded correctly (that is, that it satisfies the design requirements for that module).

The second level involves testing small program packages, or groups of subprograms related in functionality (such as the modules that comprise the propulsion system). These are tested as separate packages to further debug them and to test the input/output interaction between the modules. Model validation also begins here in its simplest form. Test drivers are used at both the first and second levels. For example, a test driver has been developed for static testing of subroutines; it allows control of the input and output variables of each group of program packages. Inputs are generated to stimulate each individual program package in a controlled manner so that the resulting outputs can be examined, usually graphically. In aero models, for example, the angle of attack is varied from -180° to +180° for given Mach numbers, and the coefficients that comprise the aero model are plotted as a function of angle of attack to assure that their values are correct and continuous. Testing at the second level completes the static testing of the math model.

The complete math model is tested for dynamic response verification at level three. Two test tools are used for analysis. An open-loop test generator applies step, sine wave, ramp, or doublet inputs to the simulation; these are used to assess dynamic responses, and because the inputs are computer generated, they can be reproduced exactly, providing easily analyzed inputs. A second program is used to drive the simulator with aircraft flight test data. This program has the capability of over-driving the aircraft math model states or controls with those recorded from the aircraft.
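The static angle-of-attack sweep described above can be sketched in code. The lift-coefficient function here is a toy stand-in for a real aero table, and the step size and jump threshold are illustrative assumptions:

```python
import math

def toy_lift_coefficient(alpha_deg, mach=0.6):
    """Illustrative lift coefficient: smooth and periodic over the full
    sweep. A real simulator would look this up in an aero data table."""
    return 1.2 * math.sin(math.radians(alpha_deg)) / (1.0 - 0.2 * mach)

def sweep_is_continuous(coeff_fn, step_deg=1.0, max_jump=0.1):
    """Drive the model from -180 deg to +180 deg at a fixed Mach number and
    flag any jump between adjacent samples larger than max_jump, as a crude
    stand-in for the graphical continuity check described in the text."""
    alphas = [-180.0 + i * step_deg for i in range(int(360.0 / step_deg) + 1)]
    values = [coeff_fn(a) for a in alphas]
    jumps = [abs(b - a) for a, b in zip(values, values[1:])]
    return max(jumps) <= max_jump

print(sweep_is_continuous(toy_lift_coefficient))  # True for the smooth toy model
```

An aero table with a bad breakpoint would fail the same check, which is the kind of discontinuity the graphical inspection is meant to catch.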
For instance, the flight control system can be validated completely by using flight test feedbacks, such as pitch rate and normal acceleration, together with the pilot's input, as inputs to the simulated flight controls. In this way, the outputs of the simulated flight controls, such as control surface positions, can be examined on a one-for-one basis against the outputs of the flight controls in the aircraft: if the simulator flight controls receive the same inputs as the aircraft flight controls, their control surface deflections should be the same. Likewise, the aero response of the simulator can be isolated from the effect of the flight controls by driving the aero simulation with the actual control surface deflections recorded in the flight test program and then comparing the dynamic response of the simulator with that of the aircraft. The same procedure can be followed for engine validation.

Final testing, consisting of pilot-in-the-loop evaluation, is performed at level four. By this time the math model has already been validated, but adjustments may still have to be made to gain pilot approval. These adjustments primarily result from the limited ability of the motion and visual systems to realistically reproduce pilot cues, and they are made by improving the cuing system compensation.
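The over-drive comparison for the flight controls can be sketched as follows. The control law, its gains, and the recorded values are all illustrative assumptions, not data from any real flight control system:

```python
def elevator_command(pilot_input, pitch_rate, k_stick=0.8, k_q=-0.3):
    """Toy pitch control law: stick gearing plus pitch-rate damping.
    The gains are arbitrary illustrative values."""
    return k_stick * pilot_input + k_q * pitch_rate

# Hypothetical flight-test recordings at matching time steps.
recorded_stick      = [0.0, 1.0, 2.0, 1.0, 0.0]       # pilot input
recorded_pitch_rate = [0.0, 0.5, 1.5, 2.0, 1.0]       # deg/s feedback
recorded_elevator   = [0.0, 0.66, 1.14, 0.21, -0.31]  # surface deflection, deg

# Over-drive the simulated control law with the recorded feedbacks and pilot
# inputs, then compare its elevator output one-for-one against the surface
# deflection recorded on the aircraft.
max_error = max(
    abs(elevator_command(u, q) - d)
    for u, q, d in zip(recorded_stick, recorded_pitch_rate, recorded_elevator)
)
print(f"max elevator mismatch: {max_error:.2f} deg")  # prints: max elevator mismatch: 0.01 deg
```

Because the simulated control law sees exactly the inputs the aircraft's control law saw, any mismatch in surface deflection points to an error in the control-law model itself rather than in the aero or engine models.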